Model Architectures¶
This section provides detailed documentation for all model architectures in the project. Our models are organized into several categories:
- Self-Supervised Learning: Pre-training architectures for learning representations without labels
- Encoders: Architectures for extracting representations from data
- Classifiers: Architectures for classifying data and modeling downstream tasks
Base Components¶
The foundational interfaces and base classes that models build upon.
src.utils.model_weights
¶
PretrainedWeightsMixin
¶
Mixin class for loading pretrained weights with intelligent key matching.
Source code in src/utils/model_weights.py
load_pretrained_weights(weights_path, strict=False, missing_key_threshold=0.1)
¶
Load pretrained weights with intelligent key matching.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| weights_path | str | Path to the pretrained weights file | required |
| strict | bool | Whether to strictly enforce matching keys | False |
| missing_key_threshold | float | Maximum allowed percentage of missing keys | 0.1 |

Raises:

| Type | Description |
|---|---|
| RuntimeError | If loading criteria are not met |
Source code in src/utils/model_weights.py
Our models implement a sophisticated weights management system through the PretrainedWeightsMixin. This system is designed with several key principles:
- Robustness: Gracefully handle different weight file formats and structures
- Flexibility: Support partial loading and key matching
- Safety: Validate shapes and provide meaningful errors
- Transparency: Detailed logging of the loading process
The mixin provides intelligent weight loading with features like:
- Automatic nested state dict extraction
- Configurable missing key tolerance
- Shape validation
- Detailed loading statistics
- Extensible key matching logic
Usage Example
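As a minimal sketch of how the mixin's loading logic behaves: the stand-in below uses plain dicts in place of tensor state dicts, and the helper name and unwrapping details are illustrative assumptions. Only the `load_pretrained_weights(weights_path, strict=False, missing_key_threshold=0.1)` signature comes from the API above.

```python
def load_weights_sketch(model_keys, checkpoint, strict=False,
                        missing_key_threshold=0.1):
    """Illustrative stand-in for the mixin's key-matching logic (not the
    project's actual implementation)."""
    # Unwrap nested state dicts commonly written by training frameworks.
    state = checkpoint
    while isinstance(state, dict) and "state_dict" in state:
        state = state["state_dict"]
    # Normalize keys, e.g. strip a DataParallel-style "module." prefix.
    state = {k.removeprefix("module."): v for k, v in state.items()}
    matched = {k: v for k, v in state.items() if k in model_keys}
    missing = set(model_keys) - set(matched)
    if strict and missing:
        raise RuntimeError(f"Strict loading failed, missing: {sorted(missing)}")
    if len(missing) > missing_key_threshold * len(model_keys):
        raise RuntimeError(f"Too many missing keys: {len(missing)}/{len(model_keys)}")
    return matched, missing
```

The tolerance check is what makes partial loading safe: a few missing keys (e.g. a freshly initialized classification head) pass silently, while a largely mismatched checkpoint fails loudly instead of training from near-random weights.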
Design Philosophy
The mixin is designed to solve common issues in deep learning weight management:
- Versioning: Models evolve, but weights should remain usable
- Flexibility: Support both exact and partial loading
- Debugging: Clear feedback about what was loaded
- Safety: Prevent silent failures with shape mismatches
- Extensibility: Easy to customize key matching logic
src.models.encoder_interface
¶
EncoderInterface
¶
Bases: ABC
Interface for neural network encoders that extract features from input data.
Source code in src/models/encoder_interface.py
forward_features(x, localized=False)
abstractmethod
¶
Extract features from input tensor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | Tensor | Input tensor | required |
| localized | bool | Whether to return localized features instead of global features | False |

Returns:

| Type | Description |
|---|---|
| Tensor | Tensor of extracted features |
Source code in src/models/encoder_interface.py
The encoder interface provides a unified way to extract features from any encoder in the project, abstracting away the data preprocessing pipeline. It was designed specifically for generating representations.
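To illustrate the contract, here is a toy implementation of the interface using plain lists in place of torch tensors (the class and method below are illustrative; only the `forward_features(x, localized=False)` signature comes from the API above):

```python
from abc import ABC, abstractmethod

class EncoderSketch(ABC):
    """Toy stand-in for EncoderInterface (lists instead of torch tensors)."""

    @abstractmethod
    def forward_features(self, x, localized=False):
        """Return global features, or per-position features if localized."""

class MeanPoolEncoder(EncoderSketch):
    def forward_features(self, x, localized=False):
        if localized:
            return x  # one feature vector per position
        n = len(x)
        return [sum(col) / n for col in zip(*x)]  # mean-pooled global feature
```

The `localized` flag is the key design point: the same encoder can serve both global representation generation and tasks that need per-position features.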
Self-Supervised Learning¶
Our self-supervised learning implementations are based on state-of-the-art approaches adapted for medical data.
src.models.mae
¶
Implementation Credits
Our MAE implementation is based on:
- Original paper: "Masked Autoencoders Are Scalable Vision Learners" by He et al.
- Code adapted from Turgut et al.'s MAE implementation, which is a fork of Facebook Research's MAE
- Adapted for time series data with 1D signal masking and ECG-specific components
src.models.mae_lit
¶
LitMAE
¶
Bases: MaskedAutoencoderViT, LightningModule
Source code in src/models/mae_lit.py
ncc(data_0, data_1)
¶
Zero-normalized cross-correlation coefficient between two data sets.
Zero-normalized cross-correlation equals the cosine of the angle between the unit vectors F and T; it is therefore 1 if and only if F equals T multiplied by a positive scalar.
Parameters¶
data_0, data_1 : tensors of the same size
Source code in src/models/mae_lit.py
src.models.sim_clr
¶
SimCLR
¶
Bases: LightningModule
Lightning module for imaging SimCLR.
Alternates training between contrastive model and online classifier.
Source code in src/models/sim_clr.py
configure_optimizers()
¶
Define and return the optimizer and scheduler for the contrastive model and the online classifier. The scheduler for the online classifier is often disabled.
Source code in src/models/sim_clr.py
forward(x)
¶
on_validation_epoch_end()
¶
Log an image from each validation step using the appropriate logger.
Source code in src/models/sim_clr.py
training_step(batch, _)
¶
Alternates the training loss calculation between the contrastive model and the online classifier.
Source code in src/models/sim_clr.py
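The alternation pattern can be sketched as follows. Note the even/odd scheme is an assumption for illustration; the source only states that training alternates, and the real module relies on Lightning's multiple-optimizer mechanism:

```python
def training_step_sketch(batch_idx, contrastive_update, classifier_update):
    """Alternate which objective is optimized on each batch:
    even batches update the contrastive model, odd batches the
    online classifier (illustrative scheme)."""
    if batch_idx % 2 == 0:
        return contrastive_update()
    return classifier_update()
```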
validation_step(batch, _)
¶
Validate both the contrastive model and the classifier.
Source code in src/models/sim_clr.py
Implementation Credits
Our SimCLR implementation is based on:
- Original paper: "A Simple Framework for Contrastive Learning of Visual Representations" by Chen et al.
- Code adapted from MMCL-ECG-CMR by Turgut et al.
Encoders¶
Model definitions mainly designed to obtain representations.
src.models.ecg_encoder
¶
ECGEncoder
¶
Bases: PretrainedWeightsMixin, EncoderInterface, VisionTransformer
Source code in src/models/ecg_encoder.py
vit_patchX(**kwargs)
¶
Create a Vision Transformer conforming to the pre-trained weights by Turgut et al. (2025).
Source code in src/models/ecg_encoder.py
Implementation Credits
The encoder is also based on the MAE implementation by Turgut et al., as outlined in the MAE section.
src.models.cmr_encoder
¶
CMREncoder
¶
Bases: PretrainedWeightsMixin, EncoderInterface, Module
Source code in src/models/cmr_encoder.py
Implementation Credits
The encoder is also based on the MMCL-ECG-CMR implementation by Turgut et al., as outlined in the MMCL-ECG-CMR section.
Classifiers¶
Model definitions mainly designed to classify data. Generally, these models can be extended to perform any kind of downstream task.
src.models.ecg_classifier
¶
ECGClassifier
¶
Bases: PretrainedWeightsMixin, MetricsMixin, VisionTransformer, LightningModule
Source code in src/models/ecg_classifier.py
attention_forward_wrapper(attn_obj)
staticmethod
¶
Modified version of the forward() method of the Attention class in timm.models.vision_transformer.
Source code in src/models/ecg_classifier.py
forward_features(x)
¶
x: [B=N, L, D], sequence
Source code in src/models/ecg_classifier.py
random_masking(x, mask_ratio)
¶
Perform per-sample random masking by per-sample shuffling. Per-sample shuffling is done by argsorting random noise. x: [N, L, D], sequence
Source code in src/models/ecg_classifier.py
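The argsort-of-noise trick described above can be sketched for a single sequence as follows (a pure-Python stand-in; the real method operates on batched [N, L, D] tensors and draws independent noise per sample):

```python
import random

def random_masking_sketch(seq, mask_ratio, rng=random):
    """Per-sample random masking: argsort uniform noise, keep the smallest."""
    noise = [rng.random() for _ in seq]                # one draw per position
    ids_shuffle = sorted(range(len(seq)), key=noise.__getitem__)  # "argsort"
    len_keep = int(len(seq) * (1 - mask_ratio))
    keep = set(ids_shuffle[:len_keep])                 # smallest noise is kept
    mask = [0 if i in keep else 1 for i in range(len(seq))]  # 1 = masked
    return [seq[i] for i in sorted(keep)], mask
```

Because every position receives i.i.d. noise, argsorting it produces a uniformly random permutation, so keeping the first `len_keep` indices selects a uniformly random subset of positions for each sample.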
random_masking_blockwise(x, mask_c_ratio, mask_t_ratio)
¶
2D ECG recording (N, 1, C, T): masks channels and time steps under mask_c_ratio and mask_t_ratio, respectively. Performs per-sample random masking by per-sample shuffling; shuffling is done by argsorting random noise. x: [N, L, D], sequence
Source code in src/models/ecg_classifier.py
test_step(batch, _)
¶
Test step for downstream task.
Source code in src/models/ecg_classifier.py
training_step(batch, _)
¶
Training step for downstream task.
Source code in src/models/ecg_classifier.py
validation_step(batch, _)
¶
Validation step for downstream task.
Source code in src/models/ecg_classifier.py
Implementation Credits
The classifier is also based on the MAE implementation by Turgut et al., as outlined in the MAE section.
src.models.cmr_classifier
¶
CMRClassifier
¶
Bases: CMREncoder, MetricsMixin, LightningModule
Source code in src/models/cmr_classifier.py
configure_optimizers()
¶
Configure optimizer for classification task.
Source code in src/models/cmr_classifier.py
forward(x)
¶
test_step(batch, _)
¶
Test step for classification.
Source code in src/models/cmr_classifier.py
training_step(batch, _)
¶
Training step for classification.
Source code in src/models/cmr_classifier.py
validation_step(batch, _)
¶
Validation step for classification.
Source code in src/models/cmr_classifier.py
Implementation Credits
The classifier is also based on the SimCLR implementation by Turgut et al., as outlined in the SimCLR section.
Utility Models¶
Helper models for evaluation and feature extraction. These models are not part of the main model architecture and are not meant to be used directly but rather as building blocks for other models.
src.models.linear_classifier
¶
LinearClassifier
¶
Bases: Module
Simple linear classifier: a single fully connected layer mapping the input to class predictions.
Source code in src/models/linear_classifier.py
init_weights(m, init_gain=0.02)
¶
Initializes weights according to the desired strategy.
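A toy sketch combining both pieces: a single fully connected layer whose weights are drawn from a zero-mean normal scaled by `init_gain`. This is a pure-Python, list-based stand-in; the real module is a torch nn.Module, and treating `init_gain` as the standard deviation is an assumption for illustration:

```python
import random

class LinearClassifierSketch:
    """Single fully connected layer: logits = W x + b (toy, list-based)."""

    def __init__(self, in_features, num_classes, init_gain=0.02, seed=0):
        rng = random.Random(seed)
        # init_weights: zero-mean normal with std = init_gain, zero bias
        self.W = [[rng.gauss(0.0, init_gain) for _ in range(in_features)]
                  for _ in range(num_classes)]
        self.b = [0.0] * num_classes

    def forward(self, x):
        return [sum(w * xi for w, xi in zip(row, x)) + bi
                for row, bi in zip(self.W, self.b)]
```

Such a head is typically trained on top of frozen encoder features for linear-probe evaluation of the learned representations.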