To improve the robustness of machine learning models, several researchers recommend distributionally robust optimization (DRO), in which a model is trained to perform well on an entire collection of data distributions (the uncertainty set) rather than on the training distribution alone. However, the model developer must specify the uncertainty set in advance, based on what they believe a plausible test distribution looks like. To make the DRO procedure more flexible and problem-specific, Michel et al. propose defining the uncertainty set with parametric generative models. Specifically, they present a relaxation of the KL-constrained inner maximization objective so that the worst-case distribution can be found by gradient-based optimization of large-scale generative models. In addition, they put forth a set of model-selection heuristics to guide hyperparameter search.
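To make the min-max structure concrete, the following is a minimal sketch, not the authors' implementation: the task model minimizes a re-weighted loss while a small adversarial "weighter" network maximizes it under a KL penalty that keeps the induced distribution close to the empirical training distribution. The weighter network (a stand-in for the likelihood ratio a generative model would supply), the toy data, and hyperparameters such as kl_weight are illustrative assumptions.

```python
# A minimal sketch of DRO with a parametric adversary (illustrative, simplified).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy binary-classification data standing in for the training distribution p.
X = torch.randn(512, 10)
y = (X[:, 0] + 0.5 * torch.randn(512) > 0).long()

model = nn.Linear(10, 2)                          # task model (theta)
weighter = nn.Sequential(nn.Linear(10, 16),       # parametric adversary (psi)
                         nn.Tanh(), nn.Linear(16, 1))
opt_model = torch.optim.SGD(model.parameters(), lr=0.1)
opt_adv = torch.optim.SGD(weighter.parameters(), lr=0.01)
kl_weight = 1.0                                   # penalty replacing the hard KL constraint


def importance_weights(scores):
    # Self-normalized weights proportional to exp(score), rescaled to mean 1;
    # they play the role of the likelihood ratio q_psi / p on the batch.
    log_w = scores - torch.logsumexp(scores, dim=0) + math.log(len(scores))
    return log_w, log_w.exp()


for step in range(200):
    # Inner (relaxed) maximization: the adversary up-weights hard examples
    # but pays a penalty for drifting far from the training distribution.
    losses = F.cross_entropy(model(X), y, reduction="none")
    log_w, w = importance_weights(weighter(X).squeeze(-1))
    kl = (w * log_w).mean()                       # batch estimate of KL(q_psi || p)
    adv_objective = (w * losses.detach()).mean() - kl_weight * kl
    opt_adv.zero_grad()
    (-adv_objective).backward()                   # ascend by descending the negation
    opt_adv.step()

    # Outer minimization: the model trains on the adversarially re-weighted loss.
    losses = F.cross_entropy(model(X), y, reduction="none")
    with torch.no_grad():
        _, w = importance_weights(weighter(X).squeeze(-1))
    opt_model.zero_grad()
    (w * losses).mean().backward()
    opt_model.step()
```

The alternating updates mirror the min-max game: each adversary step approximately solves the relaxed inner maximization by gradient ascent, which is what allows the adversary to be an arbitrary parametric model rather than a hand-designed uncertainty set.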