Falcon summarizes a few key approaches to contrastive self-supervised learning, in which representations are extracted from unlabeled data by forming contrasting positive and negative pairs of inputs. He describes CPC, which splits an augmented image into subpatches, so that "positive" pairs are subpatches belonging to the same image, and AMDIM, which applies augmentation multiple times to each image, so that "positive" pairs are two such augmented versions of the same image. Falcon compares these and other methods along axes such as their similarity measure, loss function, and representation-extraction strategy, and introduces a final framework (YADIM) that combines the data augmentation pipelines of CPC and AMDIM.
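The AMDIM-style pairing described above can be sketched in a few lines. The following is a minimal numpy illustration, not Falcon's implementation: the "augmentation" (additive noise), the "encoder" (a fixed random linear map), and all dimensions are illustrative assumptions. Each image is augmented twice; the two views of image i form the positive pair, and every other view in the batch serves as a negative under an InfoNCE-style loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x, rng):
    # Toy "augmentation": small additive noise (stand-in for crops, jitter, etc.).
    return x + 0.1 * rng.standard_normal(x.shape)

def encode(x, W):
    # Toy "encoder": a fixed linear map followed by L2 normalization.
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def info_nce(z_a, z_b, temperature=0.1):
    # For each anchor z_a[i], the positive is z_b[i] (the other augmented view
    # of the same image); all other z_b[j] in the batch act as negatives.
    logits = (z_a @ z_b.T) / temperature     # cosine similarities (unit-norm views)
    idx = np.arange(len(z_a))                # positives sit on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[idx, idx].mean()

# A batch of 8 "images" (random 32-d vectors) and a random linear encoder.
images = rng.standard_normal((8, 32))
W = rng.standard_normal((32, 16))

view_a = augment(images, rng)  # AMDIM-style: augment each image twice;
view_b = augment(images, rng)  # the two views of image i are the positive pair
loss = info_nce(encode(view_a, W), encode(view_b, W))
print(round(float(loss), 4))
```

A CPC-style variant would instead form positives from subpatches of the same image; the loss computation is analogous once the subpatch embeddings are in hand.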