In this blog post, Chenggang Wu explores some of the challenges involved in operationalizing ML models, in the context of building an Emoji Slackbot. Training the model itself was fairly straightforward (although he had to debug a PyTorch implementation of a pre-trained model), but standing up an always-on, scalable prediction service proved far more challenging. He expected services like AWS SageMaker to streamline this step, yet it turned out to be more complicated than he anticipated and required several infrastructure-management and developer tools that most data and research scientists are not familiar with. Algorithmia offered a smoother developer experience, but integrating with Slack was still tough. He concludes that considerable work remains before prediction serving becomes truly scalable, performant, and easy to use.
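
To give a sense of what "prediction serving" involves even in its simplest form, here is a minimal sketch of wrapping a trained PyTorch model in an always-on HTTP endpoint with Flask. The model checkpoint, featurizer, and emoji label set are hypothetical placeholders rather than anything from Wu's post; this is just the kind of hand-rolled service that tools like SageMaker and Algorithmia aim to replace.

```python
# Minimal sketch of a hand-rolled prediction service (hypothetical names).
# Assumes a trained PyTorch text classifier saved as a TorchScript file.
import torch
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical label set; a real bot would map model outputs to Slack emoji.
EMOJI_LABELS = [":smile:", ":thumbsup:", ":heart:", ":fire:"]

# Load the serialized model once at startup so every request reuses it.
model = torch.jit.load("emoji_model.pt")  # assumed TorchScript checkpoint
model.eval()

def featurize(text: str) -> torch.Tensor:
    # Placeholder featurizer: a real service would tokenize the message
    # with the same vocabulary and preprocessing used at training time.
    return torch.tensor([[float(len(text))]])

@app.route("/predict", methods=["POST"])
def predict():
    text = request.get_json(force=True).get("text", "")
    with torch.no_grad():
        logits = model(featurize(text))
    return jsonify({"emoji": EMOJI_LABELS[int(logits.argmax())]})

if __name__ == "__main__":
    # A production deployment would also need a proper WSGI server,
    # autoscaling, monitoring, and a way to roll out retrained models --
    # exactly the operational burden the post describes.
    app.run(host="0.0.0.0", port=8080)
```

Even this toy version leaves open the questions of scaling, provisioning, and keeping the endpoint in sync with retrained models, which is where the friction Wu describes sets in.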