For machine learning to reach its full potential, it must be deployed not only on-premises and in the cloud, but also at the edge. However, enabling ML on embedded devices (e.g., for wake-word detection or predictive maintenance) is challenging: developers must manually optimize models for each hardware backend, and they lack tools to objectively compare hardware performance. To address this bottleneck, David et al. present TensorFlow Lite Micro (TFLite Micro), an ML inference framework for embedded systems that takes an interpreter-based approach to prioritize portability, flexibility, and extensibility. TFLite Micro is designed to ease the path from model training to deployment on embedded targets, while also enabling hardware vendors to supply optimized implementations on a per-kernel basis.
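
To make the interpreter-based design concrete, here is a minimal inference sketch using TFLite Micro's C++ API as it appeared around the time of the paper (exact headers and constructor signatures vary across library versions). The application registers only the kernels its model actually uses via an op resolver, which is also the hook through which vendors plug in per-kernel optimized implementations, and it supplies a fixed-size tensor arena so the runtime needs no dynamic allocation. The model bytes (`g_model_data`) and the `RunInference` wrapper are hypothetical placeholders, not from the paper.

```cpp
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Hypothetical flatbuffer produced by the TFLite converter and
// compiled into the binary (embedded targets often lack a filesystem).
extern const unsigned char g_model_data[];

// Static tensor arena: the interpreter allocates all tensors here,
// so no malloc/free is needed on the target.
constexpr int kArenaSize = 10 * 1024;
alignas(16) static uint8_t tensor_arena[kArenaSize];

int RunInference(const float* features, int n, float* scores, int m) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the ops this model needs; hardware vendors can swap
  // in optimized kernels behind these per-op registrations.
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  // Copy features into the model's input tensor and run the graph.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < n; ++i) input->data.f[i] = features[i];
  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Read back the output scores.
  TfLiteTensor* output = interpreter.output(0);
  for (int i = 0; i < m; ++i) scores[i] = output->data.f[i];
  return 0;
}
```

Because the model stays a data blob interpreted at runtime rather than being compiled per target, the same application code ports across backends; only the kernel implementations behind the resolver change.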