Elasticsearch can reindex nearly one billion documents in one hour. SQLite can scale to nearly four million queries per second on a single server. Druid can ingest over 100 billion new rows per day. With such powerful data management platforms, how do data teams move data today?

Most adopt a data orchestration stack with separate tools for ELT, data modeling and transformation, metrics computation, and reverse ETL. They often also add language-specific tools for data scientists and engineers who prefer Python, a scheduling framework for workflows with complex dependencies, and an execution engine to run jobs at scale.

The bottom line: data practitioners usually need 4-8 tools just to move data into and out of the data warehouse. Instead of implementing new analytical and operational use cases, many data teams are consumed with building and managing data pipelines. 

Meanwhile, FAANG and recently IPO’ed tech companies are blogging about their real-time data applications like dynamic pricing and inventory management. They’re posting about how they transitioned to decentralized data infrastructure and delta architectures to optimize performance, improve developer productivity and enable remote work. 

It’s hard for most data practitioners to read about these feats as they stare numbly at half a dozen consoles, trying to untangle their burgeoning DAG of DAGs. How can they compete if they must spend hours perusing the documentation for Django, Singer, Airflow, and Docker just to power a batch-based BI dashboard?

Meroxa is the answer. Named after the Merox process, which oil refineries use to make products like jet fuel, Meroxa makes it easy to extract value from data (the “new oil”). Meroxa streamlines the movement of data from any source to any sink in real time, unifying data orchestration and obviating the need for point products, each with its own configuration profile and operational nuances. Today, the platform includes a change data capture service, a schema registry, an event streaming service, an API proxy, and an incident automation framework, letting teams build fast, reliable, scalable data pipelines in minutes.

Unlike batch tools that sacrifice speed and performance for simplicity, Meroxa’s easy-to-use streaming data platform will grow with data teams as they build data-intensive applications that rival those of the tech behemoths. Built on the latest advances in distributed systems and data management, Meroxa empowers its users to embrace a future in which machine learning systems are online, dashboards update every second, and blazingly fast services rely on multiple specialized datastores.

This vision may seem bold, but years ago a similar startup promised to drastically simplify developer workflows by eliminating many of the steps needed to deploy and scale web applications. DeVaris Brown and Ali Hamidi were part of the team at Heroku that delivered on that promise. During their tenure at Heroku, however, they watched a growing number of teams struggle to deploy and scale data-driven applications. As brilliant technologists and empathetic tool builders, they knew they had to take the headache out of deploying and managing real-time data pipelines.

We can’t imagine a better team to transform the way data is operationalized, so we’re grateful that Adam Gross, the former CEO of Heroku, connected us to DeVaris and Ali nearly a year and a half ago. It was clear from our first meetings that they had the skills and experience to build best-in-class developer tools and experiences. Our subsequent diligence confirmed that data and engineering teams need a Heroku-like platform to easily build and scale streaming data pipelines. Since our initial seed investment, DeVaris and Ali have recruited an exceptionally talented technical and GTM team, executed their roadmap swiftly yet meticulously, and engaged stakeholders at companies big and small.

We’re grateful to support Meroxa as they launch their platform today. We’re confident that Meroxa will enable data teams to respond to urgent demands (e.g., syncing production databases with data warehouses and lakes) while also unlocking long-term strategies (e.g., unifying operational and analytical data infrastructure). Ultimately, we expect Meroxa to have a lasting impact on how data teams work and how data products are created, and we feel honored to back them on this journey.