To make scientific advances, researchers often rely on small-scale experiments, which they later scale up to larger, high-impact applications. MNIST lets machine learning researchers run quick, cheap studies, but it is still relatively large and hard to hack, and it does a poor job of distinguishing between linear, nonlinear, and translation-invariant models. To address these shortcomings and enable rapid iteration in ML research, Sam Greydanus has open-sourced MNIST-1D, a smaller, procedurally generated dataset that discriminates between model classes more effectively.
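To give a sense of what a small, hackable, procedurally generated 1D dataset looks like, here is a minimal toy sketch in the same spirit (this is an illustrative stand-in, not the official MNIST-1D generator; the class templates, noise level, and `make_1d_dataset` helper are all assumptions): each class is a fixed 1D template, and every example is a randomly shifted, noisy copy of it, so translation-invariant models should have an easier time than models that treat each position independently.

```python
import numpy as np

def make_1d_dataset(n_per_class=100, length=40, seed=0):
    """Toy MNIST-1D-style dataset: shifted, noisy copies of 1D templates.

    Hypothetical sketch for illustration only -- not the official generator.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 1, length)
    templates = [
        np.sin(2 * np.pi * x),            # class 0: sine wave
        np.sign(np.sin(2 * np.pi * x)),   # class 1: square wave
        2 * np.abs(2 * x - 1) - 1,        # class 2: triangle wave
    ]
    X, y = [], []
    for label, template in enumerate(templates):
        for _ in range(n_per_class):
            shift = rng.integers(0, length)              # random translation
            sample = np.roll(template, shift)            # circular shift
            sample = sample + 0.1 * rng.normal(size=length)  # additive noise
            X.append(sample)
            y.append(label)
    return np.array(X), np.array(y)

X, y = make_1d_dataset()
print(X.shape, y.shape)  # (300, 40) (300,)
```

Because the whole dataset is a few hundred short vectors generated on the fly, a full train-and-evaluate loop runs in seconds, which is exactly the kind of fast iteration the paragraph above describes.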