Luma Series A

Such a life, with all vision limited to a Point, and all motion to a Straight Line, seemed to me inexpressibly dreary. – Edwin A. Abbott, Flatland: A Romance of Many Dimensions

Luma lets anyone capture and edit reality in 3D. So I'll skip the clichéd lede for funding announcements about how we're "thrilled to invest in [company name]'s [financing round]" and just let the product speak for itself.

I hope it's now clear why we're thrilled to double down on our seed investment by leading Luma Lab’s Series A (apologies, but I’m obligated to use this phrase according to custom).

To best introduce what makes Luma special, let’s turn to a bit of history.  

The tale of art and media is as much a story of technical achievement as an aesthetic one. On the surface, art is a psychodrama played out over millennia with a cast of renowned and occasionally unhinged creators. Weaving its way through these famous biographies, however, is a less celebrated but equally important story of technical innovation. From the invention of perspective to the first camera obscura, new tools expanded the range of representation.

With an ever-expanding toolset, artists have recapitulated reality in as many styles as there are museum wings. These styles rarely emphasized accuracy, and those that did could only approximate it. Some (like Vermeer) tried their hand at photorealism, but were limited to an output measured in frames per year.

Until the invention of the camera, accuracy was fundamentally gated by the noisy distortions of the human nervous system, the game of telephone that starts with visual perception and ends in the movement of the artist's hands. All that changed in the 19th century, with the debut of the modern camera, and with it, near-instant verisimilitude. Accuracy, even if not often prized in art, was at least now possible. But this newfound accuracy, on closer inspection, isn’t quite…accurate. 

The problem is that our world is (at least) three-dimensional, but cameras collapse it down to two, projecting the world onto a flat surface. Film, too, is just the sum of many photographs per second. As a consequence, the particular point of view a camera takes is the only point of view it records. The rest of the scene, the infinity of its possible perspectives and depth, is lost to time.
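
To make that dimensional loss concrete, here is a minimal sketch of the pinhole projection a camera performs (plain Python with made-up numbers, purely illustrative): every 3D point along the same ray through the lens lands on the same 2D pixel, so depth simply disappears.

```python
# Minimal pinhole projection: a 3D point (x, y, z) in camera space maps to a
# 2D image point (u, v). The division by z is exactly where depth is lost.
def project(point, focal_length=50.0):
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# Two different points along the same ray land on the same pixel:
print(project((1.0, 2.0, 10.0)))  # (5.0, 10.0)
print(project((2.0, 4.0, 20.0)))  # (5.0, 10.0) -- twice as far, same pixel
```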

To capture the world in all its dimensions would require fundamentally rethinking the role of the camera: reframing capture from an optics problem into a data problem. And nowadays, data problems are machine learning problems, patiently awaiting the arrival of a neural network. The specific breakthrough in this case came in the form of a new class of neural networks called Neural Radiance Fields (or NeRFs).

The magic of NeRF is to turn cameras from relatively dumb recorders into data samplers. Instead of just recording, the camera intelligently samples the scene, parameterizes it, and constructs a differentiable 3D representation of the world. Other approaches, like photogrammetry, have tackled this problem, but NeRFs are the first to provide an increasingly picture-perfect representation of our three-dimensional world. NeRFs are the new foundation of technologies that will make 3D graphics learnable, differentiable, and generative: this is the heart of Luma.
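
As a rough illustration of how a radiance field turns capture into a data problem (a toy sketch, not Luma's implementation; the stand-in field and sample counts are made up), the idea is a learned function from 3D position and view direction to color and density, rendered by integrating samples along each camera ray:

```python
import numpy as np

# Toy stand-in for the learned MLP: position and view direction -> (rgb, density).
# In a real NeRF this is a trained neural network; here it is a placeholder
# so the rendering loop below has something to sample.
def radiance_field(position, direction):
    density = np.exp(-np.linalg.norm(position))    # made-up density falloff
    rgb = np.clip(0.5 + 0.5 * position, 0.0, 1.0)  # made-up color
    return rgb, density

# Volume rendering: march along a camera ray and accumulate color, weighting
# each sample by how much light survives to reach it (standard NeRF quadrature).
def render_ray(origin, direction, near=0.1, far=4.0, n_samples=64):
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    color, transmittance = np.zeros(3), 1.0
    for t in ts:
        rgb, sigma = radiance_field(origin + t * direction, direction)
        alpha = 1.0 - np.exp(-sigma * delta)
        color += transmittance * alpha * rgb
        transmittance *= 1.0 - alpha
    return color

pixel = render_ray(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]))
```

Because every step of that pipeline is differentiable, the representation can be fit directly from ordinary photos, which is what makes the capture a data problem rather than an optics one.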

What you get with Luma is not just pretty (and accurate); it's programmable. The upshot of this is, to borrow a phrase, camera as code. The real camera work happens after the 3D capture. In fact, Luma lets you conjure virtual cameras out of thin air, with arbitrary control over their parameters (focal length, roll, pitch, yaw, even whole trajectories). You can now create seemingly impossible shots.
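
To make "camera as code" literal, here is a hedged sketch of what a programmable camera might look like; the VirtualCamera class and orbit_shot helper are hypothetical illustrations, not Luma's actual API:

```python
import math
from dataclasses import dataclass

# Hypothetical virtual camera: every parameter a physical rig exposes is now
# just a field you can set, interpolate, or generate programmatically.
@dataclass
class VirtualCamera:
    position: tuple       # (x, y, z) in scene space
    roll: float           # degrees
    pitch: float          # degrees
    yaw: float            # degrees
    focal_length: float   # millimeters

# A whole camera trajectory is just a sequence of cameras, one per frame.
def orbit_shot(radius=3.0, height=1.5, frames=120, focal_length=35.0):
    for i in range(frames):
        theta = 2 * math.pi * i / frames
        yield VirtualCamera(
            position=(radius * math.cos(theta), height, radius * math.sin(theta)),
            roll=0.0,
            pitch=-10.0,
            yaw=math.degrees(theta) + 90.0,
            focal_length=focal_length,
        )

trajectory = list(orbit_shot())  # a crane-less orbit, described entirely in code
```

Because the trajectory is just data, shots can be generated, interpolated, or revised after the fact, the way you would edit any other piece of code.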

This anticipates a world where anyone with an iPhone can be a cinematographer. Imagine declaratively prompting your scene to take on the signature David Fincher style, where the camera's movement echoes the actor's.

Or, instead of rigging a camera on a physical rail to pull off a dolly zoom, recreate the effect with a few clicks.
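
As a sketch of what those few clicks might compute under the hood (again hypothetical, reusing the VirtualCamera above), a dolly zoom reduces to one constraint: scale focal length with distance so the subject's on-screen size stays fixed while the background stretches.

```python
# Dolly zoom, in code: the subject's size on screen is roughly proportional to
# focal_length / distance, so scaling focal length with distance keeps the
# subject fixed while the background appears to warp. (Illustrative only.)
def dolly_zoom(start_distance=2.0, end_distance=6.0, frames=90, start_focal=24.0):
    for i in range(frames):
        t = i / (frames - 1)
        distance = start_distance + t * (end_distance - start_distance)
        focal = start_focal * distance / start_distance  # holds subject size constant
        yield VirtualCamera(
            position=(0.0, 1.0, -distance),
            roll=0.0,
            pitch=0.0,
            yaw=0.0,
            focal_length=focal,
        )
```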

So, Luma makes cameras programmable. But what about the scenes themselves? They too will soon become editable. The particular way NeRFs reconstruct reality means that every aspect of a scene will one day be open to full creative control.

What generative AI does for images, with just a prompt and some imagination, Luma will do for entire scenes and objects: in other words, the rich 3D world we live in. Luma users will soon become VFX artists, able to bend the laws of physics and reality itself to their will.

Shrinking a Hollywood studio into an iPhone app won't just change media; it will transform core elements of the economy, from e-commerce and ads to gaming and social media. Someone selling from their garage can produce product videos on their iPhone that rival the work of mega-budget ad agencies; a game creator can bring the real world into their game as an asset; and that's to say nothing of AR/VR when its time comes. We've barely scratched the surface of the creative (and commercial) possibilities this unlocks.

Our excitement about this expansive vision is matched by our conviction in the team building it. Co-founders Amit Jain and Alex Yu bring a rare combination of deep technical talent and product vision. Their team is just as impressive: stellar engineers with expertise in systems engineering, rendering, and product, alongside researchers who've made core contributions to generative AI and NeRFs.

Welcome to Amit, Alex and the rest of the Luma team!
