When building supervised machine learning models, developers must carefully decide how to represent the data used to make predictions. More recent approaches attempt to learn a representation that enables the model to generalize well. However, existing methods may not guarantee generalization, or may be agnostic to the classifier that will ultimately be used. Dubois et al. suggest that generalization can be enforced by changing the representation of inputs so that empirical risk minimizers perform well regardless of the complexity of the functional family. They introduce the decodable information bottleneck (DIB) objective, which ensures that classifiers in a functional family can predict labels but cannot distinguish examples that share the same label.
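To make the two requirements concrete, here is a minimal sketch (not the authors' implementation) of an idealized encoder in the DIB spirit: it collapses every input with the same label onto a single code, so the label is trivially decodable from the representation, yet no classifier can tell same-label examples apart. The `encode` function and the toy dataset are hypothetical illustrations.

```python
def encode(x, label):
    """Hypothetical idealized encoder: the representation carries exactly
    the label information and nothing else about the input x."""
    return label

# Toy dataset of (input feature, label) pairs -- illustrative only.
data = [(0.12, 0), (0.87, 0), (0.40, 1), (0.73, 1)]
codes = [encode(x, y) for x, y in data]

# Sufficiency: any classifier can predict labels from the representation
# (here even the identity map decodes them perfectly).
assert codes == [y for _, y in data]

# Minimality: examples sharing a label are indistinguishable in
# representation space, so no classifier can separate them.
assert encode(0.12, 0) == encode(0.87, 0)
assert encode(0.40, 1) == encode(0.73, 1)
```

In practice the paper optimizes a learned encoder toward this ideal rather than constructing it directly, but the sketch shows the target trade-off: keep what is needed to predict the label, discard what distinguishes examples within a class.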