Although most machine learning systems perform better when trained on more data, privacy and confidentiality policies and regulations often prohibit organizations from pooling their datasets into the larger corpora that would benefit these systems. Existing techniques address only part of this problem: federated learning keeps training data local but does not, on its own, prevent private information from leaking through the jointly trained model, while differential privacy bounds what can be inferred about individual records but does not keep each party's data or model confidential during the computation. Neither alone enables ML model development in settings where both confidentiality and privacy must be preserved. In contrast, Choquette-Choo et al. propose Confidential and Private Collaborative (CaPC) learning, which combines secure multi-party computation (MPC), homomorphic encryption (HE), and private aggregation of teacher-model predictions to let model developers collaborate without explicitly pooling their training data. CaPC also enables each participant to improve its own local model rather than training a single central model.
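
To make the private-aggregation step more concrete, the sketch below shows a PATE-style noisy-argmax aggregation of teacher votes in plain Python with NumPy. This is a minimal illustration under stated assumptions, not CaPC itself: the function name `noisy_argmax_label`, the vote counts, and the noise scale are illustrative, and in CaPC the corresponding aggregation is carried out under MPC and HE so that no single party ever sees the plaintext votes.

```python
import numpy as np

def noisy_argmax_label(teacher_votes, gamma=1.0, rng=None):
    """Differentially private label aggregation (PATE-style noisy argmax).

    teacher_votes: 1-D array of per-class vote counts from the answering
    parties' local models for a single query.
    gamma: scale of the Laplace noise (larger gamma -> more noise,
    stronger privacy, lower accuracy).
    Shown in the clear purely for illustration; CaPC performs this step
    under cryptographic protection.
    """
    rng = rng or np.random.default_rng()
    noisy_counts = teacher_votes + rng.laplace(scale=gamma, size=teacher_votes.shape)
    return int(np.argmax(noisy_counts))

# Hypothetical example: 10 answering parties vote over 3 classes.
votes = np.array([7.0, 2.0, 1.0])
label = noisy_argmax_label(votes, gamma=2.0)
print("privately aggregated label:", label)
```

The querying party can then use labels produced this way to fine-tune its own local model, which is what allows each participant to improve locally instead of contributing to one shared central model.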