Machine learning should be impacting practically every aspect of our lives…but it’s not yet. While new tools facilitate model development and training and new hardware accelerates inference, it’s still incredibly hard to operationalize state-of-the-art models and nearly impossible to deploy these models on heterogeneous hardware.
As such, only the largest tech companies, with access to nearly limitless compute and AI expertise, can benefit from recent AI advances. Without an easier way to make machine learning models talk to hardware, there will be no Cambrian explosion in AI.
We invested in OctoML because they radically change this dynamic by providing a universally accessible bridge between AI software and hardware.
With OctoML, which leverages TVM – an automatic and extensible deep learning compiler and runtime – any organization can simply, securely, and efficiently deploy any model on any hardware backend. Additionally, by providing hardware vendors with the ability to automatically generate and optimize tensor operations, OctoML enables any chip manufacturer to support any model architecture, including the latest and greatest. Just as Linux enabled a new generation of mobile and embedded products, internet services, and platforms in the 90s, so too will OctoML unlock the machine learning market in the coming decade. Put bluntly, we believe that OctoML will be the Machine Learning Operating System for emerging AI chips and other ML-focused devices.
But why do we have such high conviction in this bold and audacious vision? People, people, people…including the OctoML team and the TVM community. The OctoML founders have the academic and commercial pedigree that might make some Nobel Laureates blush – they are heralded by their colleagues for their intellectual acuity and for their meaningful contributions across machine learning, distributed systems, and programming languages. While their technical expertise is unrivaled, they’re also acknowledged by friends and colleagues as highly empathetic and product- and community-focused. We’ve been building a relationship with the OctoML founders for over two years – well before they started the company. Based on these experiences, we strongly believe that this is the right team to create and deliver a universal programming framework for machine learning, independent of the underlying hardware architecture.
And they’re not alone. In addition to the team of 20 that they’ve hired in just 6 months, they’re also supported by a vibrant and active developer community that they continue to inspire and support. The TVM community includes 340+ contributors, from companies including Facebook, Microsoft, Amazon, ARM, Qualcomm, and Xilinx, and more than 200 community members who attended the sold-out TVM conference last December. Community members are using TVM in deep learning cloud optimization services, accelerator support, and for automatic optimization on mobile devices. In fact, every Alexa wake-up today across all devices uses a model optimized with TVM!
At Amplify, we focus on helping amazing technical teams revolutionize industries by transforming projects into companies. We saw a phenomenal team, with an incredibly exciting project, and the potential to drastically impact machine learning development by enabling the deployment of machine learning everywhere – and we just knew we had to be a part of it. AI is not eating software yet – but with the help of the OctoML team, it will very soon.
Congratulations to the OctoML team on raising their $15M Series A round, led by Amplify Partners and our friends at Madrona Venture Group!