Although new platforms make model building and training easier, many efforts to operationalize machine learning models are bottlenecked by model and prediction serving. Prediction serving typically chains a single request through multiple stages of complex computation, all under tight latency requirements. While tools like AWS SageMaker and AzureML address prediction serving by deploying models as separate microservices, this approach limits end-to-end performance optimization and makes pipelines harder to debug. In contrast, Sreekanti et al. propose Cloudflow, a real-time prediction serving system designed around a simple dataflow API with familiar functional operators that wrap black-box models. Cloudflow is built on an autoscaling serverless back-end, enabling dataflows to be executed efficiently and optimized automatically.
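To make the dataflow idea concrete, here is a minimal sketch of a pipeline built from functional operators that wrap black-box stages. The `Dataflow` class, the `preprocess` step, and the stub `model` are all hypothetical illustrations, not Cloudflow's actual API; the point is only that stages compose into one flow that the runtime can inspect and optimize end to end.

```python
from typing import Any, Callable, List, Optional, Tuple


class Dataflow:
    """Hypothetical dataflow pipeline: each operator wraps a black-box
    function (e.g., a model), and stages compose into a single flow."""

    def __init__(self, ops: Optional[List[Tuple[str, Callable]]] = None):
        self.ops = ops or []

    def map(self, fn: Callable[[Any], Any]) -> "Dataflow":
        # Append a transformation stage; returns a new composed flow.
        return Dataflow(self.ops + [("map", fn)])

    def filter(self, pred: Callable[[Any], bool]) -> "Dataflow":
        # Append a predicate stage that can drop a request early.
        return Dataflow(self.ops + [("filter", pred)])

    def execute(self, request: Any) -> Any:
        # Run one request through every stage in order.
        value = request
        for kind, fn in self.ops:
            if kind == "map":
                value = fn(value)
            elif kind == "filter" and not fn(value):
                return None  # request dropped by the filter stage
        return value


# Hypothetical two-stage pipeline: tokenize, then a stub "model".
preprocess = lambda text: [t.lower() for t in text.split()]
model = lambda tokens: {"label": "spam" if "winner" in tokens else "ham"}

flow = Dataflow().map(preprocess).map(model)
print(flow.execute("You are a WINNER"))  # → {'label': 'spam'}
```

Because the whole pipeline is expressed as one dataflow rather than a chain of independent microservices, the system can see every stage at once, which is what enables the end-to-end optimizations the paper describes.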