Data augmentation is often used to achieve SOTA performance in machine learning, but classic augmentation methods, which randomly sample from a parameterized space of transformations, are limited in their expressivity. At the other extreme, adversarial examples (perturbations of inputs intended to fool a classifier) may not correspond to transformations that occur in the real world (e.g., translations or rotations that capture movement). In this paper, Luo et al. explore a method for producing structured transformations of inputs of the kind that could occur in natural data. Their approach imposes constraints that restrict the set of perturbations to specific structures, and then selects the transformation that maximally increases the model's loss. They demonstrate that this approach can improve generalization.
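
To make the core idea concrete, here is a minimal sketch (not the authors' implementation) of adversarially selected structured augmentation: candidate transformations are drawn from a constrained family (here, small rotations and horizontal shifts), the model's loss is evaluated on each candidate, and training proceeds on the loss-maximizing one. The model, candidate grid, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def worst_case_augment(model, x, y, angles=(-15, 0, 15), shifts=(-4, 0, 4)):
    """Return the structured transformation of the batch that maximizes the loss."""
    worst_x, worst_loss = x, None
    model.eval()
    with torch.no_grad():
        for angle in angles:
            for dx in shifts:
                # Structured perturbation: a small rotation followed by a horizontal shift.
                x_t = TF.affine(x, angle=angle, translate=[dx, 0], scale=1.0, shear=[0.0])
                loss = F.cross_entropy(model(x_t), y)
                if worst_loss is None or loss > worst_loss:
                    worst_x, worst_loss = x_t, loss
    model.train()
    return worst_x


def train_step(model, optimizer, x, y):
    """One training step on the adversarially chosen structured augmentation."""
    x_adv = worst_case_augment(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

This sketch searches a small discrete grid of transformations per batch; the paper's method may instead optimize over the constrained transformation set directly, but the selection principle (choose the loss-maximizing structured perturbation, then train on it) is the same.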