To develop unsupervised representation learning models, researchers must often handcraft “views” (i.e., transformed versions of an input). In contrast, Alex Tamkin, Mike Wu, and Noah Goodman propose Viewmaker networks, a modality-agnostic approach that learns views for unsupervised representation learning. A viewmaker network is trained jointly with the encoder network via constrained adversarial training, learning to output a stochastic perturbation that is added to the input. This approach preserves input features that are useful to the encoder while reducing the mutual information between different views.
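The "constrained" part of the training can be illustrated by the step that keeps the perturbation small: before it is added to the input, the viewmaker's raw output is projected onto a bounded-norm ball so the view cannot destroy all input information. The sketch below, in NumPy, shows this idea under assumptions: the function name `make_view`, the use of an L1 norm, and the exact scaling of the budget by input size are illustrative choices, not a faithful reproduction of the authors' implementation.

```python
import numpy as np

def make_view(x, delta, budget=0.05):
    """Constrain a raw perturbation `delta` and add it to input `x`.

    Illustrative sketch: the perturbation is scaled down, if necessary,
    so its total L1 norm stays within a fraction (`budget`) of the
    input size. The constrained perturbation is then added to produce
    the view. Names and the normalization scheme are assumptions.
    """
    scale = budget * x.size            # total distortion budget
    l1 = np.abs(delta).sum()
    if l1 > scale:                     # project onto the L1 ball
        delta = delta * (scale / l1)
    return x + delta

# Usage: even a very large raw perturbation yields a bounded view.
x = np.ones((8, 8))
delta = np.random.randn(8, 8) * 100.0  # stochastic, deliberately huge
view = make_view(x, delta)
```

Because the encoder is trained to be invariant across such views while the viewmaker adversarially makes them as different as possible within the budget, the budget is what prevents the adversary from simply erasing the input.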