Many ML projects fail – even after a model is successfully deployed – because model performance degrades when the data distribution shifts. Such degradation can result from shortcut learning, wherein a model relies on easily extracted (but spurious) features that predict the label in the training data but lose their predictive power when the input distribution changes. Makar et al. study how to build models that are robust to interventions that change the correlation between spurious features (which should not influence the prediction) and the labels. They propose a method that uses auxiliary labels identifying the spurious features, together with techniques adapted from causal inference, to train more robust models.
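To make the idea concrete, one common realization of this kind of causally motivated shortcut removal has two pieces: reweight the training examples so that the label becomes statistically independent of the auxiliary (spurious) label, and penalize learned representations whose distribution differs across auxiliary groups, e.g. with a maximum mean discrepancy (MMD) penalty. The sketch below is an illustrative reimplementation under those assumptions, not the authors' exact code; the function names and the RBF kernel choice are hypothetical.

```python
import numpy as np

def balancing_weights(y, v):
    """Per-example weights that make label y independent of auxiliary label v.

    Weighting each example by w = P(y) * P(v) / P(y, v) turns the empirical
    joint distribution of (y, v) into the product of its marginals, so the
    spurious attribute v no longer predicts the label under the reweighted data.
    """
    classes_y, classes_v = np.unique(y), np.unique(v)
    p_y = {c: np.mean(y == c) for c in classes_y}
    p_v = {c: np.mean(v == c) for c in classes_v}
    p_yv = {(cy, cv): np.mean((y == cy) & (v == cv))
            for cy in classes_y for cv in classes_v}
    w = np.array([p_y[yi] * p_v[vi] / p_yv[(yi, vi)] for yi, vi in zip(y, v)])
    return w / w.mean()  # normalize to mean 1

def mmd_rbf(x, z, sigma=1.0):
    """Squared MMD with an RBF kernel between two sets of representations.

    Added to the training loss, this penalizes representations whose
    distribution differs between the auxiliary-label groups.
    """
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(z, z).mean() - 2.0 * k(x, z).mean()
```

In a full training loop, the per-example loss would be multiplied by `balancing_weights(y, v)` and `mmd_rbf` would be evaluated on the model's hidden representations for the `v = 0` and `v = 1` subgroups, weighted by a regularization coefficient.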