Self-supervised learning approaches are designed to learn representations that are invariant to distortions of the input samples. A recurring failure mode, however, is a collapsed representation in which all inputs map to the same constant vector, and most existing methods avoid collapse through carefully engineered mechanisms. To address this limitation, Zbontar et al. propose Barlow Twins, which avoids collapsed representations without requiring large batches or asymmetric mechanisms such as stop-gradients or momentum encoders. Barlow Twins, a simple non-contrastive method for training joint-embedding architectures, is named after H. Barlow's redundancy-reduction principle, which hypothesizes that sensory systems encode information compactly by reducing redundancy between neurons.
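The core of the method is an objective that pushes the cross-correlation matrix between the embeddings of two distorted views toward the identity: the diagonal term encourages invariance to the distortions, while the off-diagonal term decorrelates embedding dimensions and thereby reduces redundancy. A minimal NumPy sketch of that objective is below; the function name, the `eps` constant, and the use of NumPy rather than the paper's PyTorch setup are illustrative choices, not the authors' implementation.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3, eps=1e-9):
    """Sketch of the Barlow Twins objective for two batches of
    embeddings z_a, z_b with shape (N samples, D dimensions).

    Each dimension is standardized over the batch, then the D x D
    cross-correlation matrix between the two views is pushed toward
    the identity: diagonal -> 1 (invariance term), off-diagonal -> 0
    (redundancy-reduction term), weighted by lam.
    """
    n = z_a.shape[0]
    # standardize each embedding dimension across the batch
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + eps)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + eps)
    # empirical cross-correlation matrix between the two views
    c = z_a.T @ z_b / n
    diag = np.diagonal(c)
    on_diag = ((1.0 - diag) ** 2).sum()          # invariance term
    off_diag = (c ** 2).sum() - (diag ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag

# Two identical views correlate perfectly dimension-by-dimension,
# so the loss is far smaller than for two unrelated batches.
rng = np.random.default_rng(0)
z = rng.standard_normal((512, 32))
loss_same = barlow_twins_loss(z, z)
loss_diff = barlow_twins_loss(z, rng.standard_normal((512, 32)))
```

In the paper the embeddings come from a shared encoder and projector applied to two augmented views of the same image batch, and the standardization is done with batch normalization; the standalone standardization here stands in for that step.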