Batch normalization (BN) enables model developers to use higher learning rates, thereby accelerating deep network training; BN mitigates internal covariate shift by standardizing layer inputs with batch statistics. However, it does not correct for confounding or bias variables that can skew the distribution of learned features. To minimize their effects, Lu et al. propose Metadata Normalization (MDN), a layer-like operation that can be inserted into a deep learning architecture to correct the distribution of features throughout training. MDN applies regression analysis to remove the effects of confounding/bias variables from the intermediate features of a network.
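As a rough sketch of the underlying idea (not the authors' implementation), the regression-based correction could look like the following, assuming the batch of intermediate features and the metadata (confounders) are available as NumPy arrays:

```python
import numpy as np

def metadata_normalize(features, metadata):
    """Residualize intermediate features against metadata via least squares.

    features: (N, D) batch of intermediate activations
    metadata: (N, M) confounding/bias variables (e.g. age, scanner site)
    Returns the features with the component explained by the metadata removed.
    """
    n = metadata.shape[0]
    # Include an intercept column so that only metadata-explained variance
    # is removed, not the overall feature means.
    X = np.hstack([np.ones((n, 1)), metadata])
    # Least-squares fit: beta = argmin ||X @ beta - features||^2
    beta, *_ = np.linalg.lstsq(X, features, rcond=None)
    # Subtract only the metadata contribution, keeping the intercept term.
    predicted = X[:, 1:] @ beta[1:]
    return features - predicted
```

Because the residual of a least-squares fit is orthogonal to the regressors, the returned features are (up to numerical precision) linearly uncorrelated with the metadata within the batch. In the actual MDN layer the regression coefficients are estimated over training batches rather than recomputed independently at test time.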