Recently, researchers have developed self-supervised learning methods that do not require explicit annotations. However, these approaches rely on pretext tasks (e.g., contrastive learning) that require the model developer to specify data augmentations which produce semantically similar and dissimilar examples. Such augmentations are domain-specific and can be difficult to design for new domains. To address this, Verma et al. propose DACL (Domain-Agnostic Contrastive Learning), which uses Mixup noise to create positive and negative samples for contrastive learning. They demonstrate that DACL can be applied to tabular data, images, graphs, and other domains.
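To make the idea concrete, below is a minimal sketch of Mixup-noise positives for flat (tabular-style) inputs, paired with a standard NT-Xent contrastive objective: a positive view of each anchor is formed by mixing it with another randomly chosen example from the batch using a mixing coefficient close to 1, so the anchor dominates. The function names (`mixup_positive`, `nt_xent_loss`), the mixing-coefficient range, the temperature, and the toy encoder are illustrative assumptions, not the exact formulation from the paper.

```python
import torch
import torch.nn.functional as F

def mixup_positive(x, lam_low=0.9, lam_high=1.0):
    """Create a Mixup-noise positive view of each example in the batch.

    Each anchor x_i is mixed with a randomly chosen other example x_j using a
    coefficient lambda close to 1, so the result stays semantically close to x_i.
    The (lam_low, lam_high) range is an illustrative choice, not the paper's.
    """
    perm = torch.randperm(x.size(0))
    lam = torch.empty(x.size(0), 1).uniform_(lam_low, lam_high)
    return lam * x + (1.0 - lam) * x[perm]

def nt_xent_loss(z1, z2, temperature=0.5):
    """Standard normalized-temperature cross-entropy (NT-Xent) contrastive loss."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d) unit vectors
    sim = (z @ z.t()) / temperature                      # (2N, 2N) similarity logits
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float('-inf'))           # drop self-similarity
    # The positive for index i is index i + N (and vice versa); all others are negatives.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Usage with a toy encoder on tabular-style data (all shapes are hypothetical).
encoder = torch.nn.Sequential(
    torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 16)
)
x = torch.randn(128, 32)
z1, z2 = encoder(mixup_positive(x)), encoder(mixup_positive(x))
loss = nt_xent_loss(z1, z2)
```

Because the "augmentation" is just a convex combination of existing samples, no domain knowledge about valid transformations (crops, rotations, token masking, etc.) is needed, which is what makes the approach applicable beyond images.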