Interpretability is a requirement for many machine learning model developers, driven by concerns including safety, fairness, and reliability. However, post-hoc explanations of black-box models can be unreliable. As such, some researchers have advocated for developing ML models that are inherently interpretable. In this paper, Rudin et al. present a formal definition of and set of principles for interpretable ML, and survey both modern and classical challenges in developing such models. Among other challenges, the authors discuss building sparse models for tabular data and scoring systems, incorporating physics-based or causal constraints, and performing interpretable dimension reduction.
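To make the sparse-model/scoring-system idea concrete, the sketch below is an illustrative approximation (not the authors' method): it fits an L1-penalized logistic regression on a tabular dataset and rounds the few surviving coefficients to integer "points" so a prediction can be tallied by hand. The dataset choice, penalty strength, and rounding scale are assumptions made for brevity; the paper discusses exact optimization approaches to scoring systems (e.g., RiskSLIM) rather than this heuristic.

```python
# Minimal sketch of a sparse, scoring-system-style model on tabular data.
# Assumptions: scikit-learn's breast cancer dataset, C=0.05, 5x rounding scale.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# A strong L1 penalty drives most coefficients to exactly zero (sparsity).
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)

# Round the surviving coefficients to small integer "points" so that a
# prediction can be computed by hand by summing points over features.
points = np.round(clf.coef_[0] * 5).astype(int)
used = np.flatnonzero(points)
print(f"{len(used)} of {X.shape[1]} features receive nonzero points")
for i in used:
    print(f"  feature {i:2d}: {points[i]:+d} points")
```

The point of the example is only that sparsity (few nonzero coefficients) and small integer weights are what make such a model auditable and usable without a computer; the paper's challenge is obtaining such models with provably good accuracy, which requires more careful optimization than post-hoc rounding.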