Contrastive learning is a self-supervised method for learning rich feature representations: it pulls the embeddings of similar objects together while pushing the embeddings of distinct objects apart. The resulting representations boost performance on downstream tasks when labeled data is limited. COLA is a contrastive learning method developed at Google Research that learns audio embeddings by predicting whether two snippets were drawn from the same recording. This repository includes a Python and TensorFlow implementation of the model architecture described in the paper. COLA should make it easier to pre-train and fine-tune audio embeddings, improving performance on speech recognition, music categorization, and other audio tasks.
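To illustrate the idea, here is a minimal NumPy sketch of a COLA-style contrastive loss over a batch of (anchor, positive) embedding pairs, where each anchor's positive comes from the same recording and the other positives in the batch act as negatives. This is an illustrative simplification, not the repository's code: it uses plain dot-product similarity, whereas COLA itself uses a learned bilinear similarity, and the function name is made up for this example.

```python
import numpy as np

def cola_style_contrastive_loss(anchors, positives):
    """Contrastive loss for a batch of (anchor, positive) embeddings.

    The positive for anchor i is positives[i] (a segment from the same
    recording); every other row in the batch serves as a negative.
    Similarity here is a plain dot product for simplicity (COLA uses a
    learned bilinear similarity).
    """
    # Similarity of every anchor against every positive: an (N, N) matrix
    # whose diagonal holds the "same recording" pairs.
    logits = anchors @ positives.T
    # Numerically stable log-softmax over each row; the correct "class"
    # for row i is column i.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy: negative mean log-probability of the true pairs.
    return -np.mean(np.diag(log_probs))

# Toy usage: 8 recordings, 16-dimensional embeddings, with each positive
# nearly identical to its anchor (as if cut from the same clip).
rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.01 * rng.normal(size=(8, 16))
loss = cola_style_contrastive_loss(anchors, positives)
```

Minimizing this loss pushes each anchor toward its same-recording segment and away from segments of other recordings, which is the training signal the paragraph above describes.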