Luma.AI Democratizes Virtual Production

The Challenge of Asset Creation

One of the major pain points in virtual production is creating photoreal environments for in-camera VFX on LED volumes. Typically, this is done by highly skilled designers, by leveraging existing assets such as those from the Unreal Engine Marketplace, via photogrammetry, or by combining approaches. But achieving camera-ready imagery with these methods is often complex and costly.

Luma.AI is an ambitious startup with an application aimed at making photorealistic 3D environment creation as straightforward as taking photos with an iPhone.

Luma’s Solution

Luma AI’s co-founder Amit Jain brings years of experience in computer vision and machine learning to the table. “My partner Alex Yu and I were thinking about what the future would look like as computing becomes more 3D and spatial,” recalls Jain. “Instead of looking at flat planes, our computers can show us the same spatial 3D as the real world. So, the company’s genesis was our realization that if we wanted to make that 3D world more real, we would need to build the infrastructure and technology so anyone can create photorealistic, high-fidelity imagery.”

Captured and rendered with Luma.AI.

I had the opportunity to try Luma’s capture app via its beta program. Although the initial use case for the software and service is 3D for VR/MR, I could quickly see the utility of the results for virtual production. The technology appears analogous to photogrammetry, requiring a series of reference images to recreate an object or scene in 3D. But it’s actually much more advanced: Luma uses Neural Radiance Fields, or NeRFs, to achieve its results.

The Power of NeRF

Jain offers a concise explanation of the difference between photogrammetry and NeRFs. “Photogrammetry takes many photos of an object and looks for correspondences between them to reconstruct geometry and texture,” he notes. “But because there is so much reflection, transparency, and translucency in the real world, you lose a lot of detail in the process, even with thousands of reference images. And even with all that effort, you get a diffuse scene requiring lots of effort to make it look realistic again.” 

“NeRFs use a neural network to process reference images and sample light, the same way humans perceive it in the real world,” Jain continues. “We can reconstruct far more complex objects with much greater photorealism than photogrammetry. With our app on an iPhone, you can often achieve better results in a couple of minutes than professionals can do with photogrammetry over days and days of work.”
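
For readers who want a more concrete picture of what Jain is describing, the sketch below shows the core NeRF idea in simplified form: a learned function maps a 3D point and viewing direction to a color and a density, and a pixel’s color is produced by sampling that function along the camera ray and compositing the samples. This is a minimal illustration only; the “field” here uses random stand-in weights, whereas in a real system (including Luma’s cloud pipeline) the network is trained against the captured reference images.

```python
# Minimal sketch of the NeRF idea (not Luma's implementation): a function
# maps (position, view direction) -> (color, density), and a ray's color is
# accumulated by volume rendering. The "network" below is a random stand-in.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in radiance field: a tiny random MLP, (x, y, z, dx, dy, dz) -> (r, g, b, sigma).
W1, b1 = rng.normal(size=(6, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 4)), np.zeros(4)

def radiance_field(points, dirs):
    """Query color (RGB) and volume density sigma at sample points along a ray."""
    h = np.tanh(np.concatenate([points, dirs], axis=-1) @ W1 + b1)
    out = h @ W2 + b2
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))      # colors squashed into [0, 1]
    sigma = np.log1p(np.exp(out[:, 3]))          # non-negative density
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-render one ray: sample the field, then composite front to back."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    dirs = np.broadcast_to(direction, points.shape)
    rgb, sigma = radiance_field(points, dirs)

    delta = np.diff(t, append=far)                # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)          # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # light surviving to each sample
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)   # final pixel color

pixel = render_ray(origin=np.array([0.0, 0.0, -3.0]), direction=np.array([0.0, 0.0, 1.0]))
print("rendered pixel RGB:", pixel)
```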

NeRFs explained in detail.

A bold claim indeed, but judging by the gallery on Luma’s website and the results I achieved on my first attempts, it holds up. “The secret sauce, if you will, happens in the cloud where we do all our processing,” reveals Jain. “The app collects the imagery and ensures that camera parameters like exposure, focus, and blur detection are correct. The actual reconstruction is happening in the cloud, making the textures sharp and the geometry accurate. It’s a lot of engineering and research that our team has worked on endlessly for the past year, so it’s a labor of love that you see in the app.”
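
To give a sense of the kind of on-device quality gating Jain alludes to, here is a hypothetical sketch of a generic capture check. It is not Luma’s code; the function name, thresholds, and filename are illustrative, and the variance-of-Laplacian blur test and mean-brightness exposure test are common, off-the-shelf techniques.

```python
# Hypothetical capture-quality check (illustrative only, not Luma's code):
# reject frames that look blurry or badly exposed before uploading them.

import cv2

def frame_ok(image_bgr, blur_threshold=100.0, exposure_range=(40, 215)):
    """Return True if a captured frame looks sharp and reasonably exposed."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Blur check: sharp images have high local contrast, so the variance
    # of the Laplacian is large; blurry frames score low.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Exposure check: mean brightness should sit away from the extremes.
    mean_brightness = float(gray.mean())

    return sharpness >= blur_threshold and exposure_range[0] <= mean_brightness <= exposure_range[1]

frame = cv2.imread("capture_frame.jpg")  # hypothetical filename
if frame is not None:
    print("keep frame" if frame_ok(frame) else "re-capture this view")
```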

Noah gives Luma.AI a quick test on his LED volume.

Visualizing the Future

Luma’s roadmap includes a plugin for Unreal Engine and other digital content creation tools to enable the direct import of environment and object captures into scenes. This promises to unlock new workflows for LED volume virtual production and in-camera VFX, along with related tasks like virtual scouting.

“One of our goals is to make virtual production much more accessible and useful to more people,” Jain reveals. “Now anyone can visit a real location, quickly capture a photorealistic 3D environment and share it with their production team. Anyone can play with it on their laptop and discover camera angles. And that can be the starting point for the imagery you put onto your LED volume. You can use it as the reference for a more complex 3D environment or use it directly, depending on your budget.”

Captured and rendered with Luma.AI.

Now anyone can visit a real location, quickly capture a photorealistic 3D environment and share it with their production team.

Luma.AI Co-Founder Amit Jain

The Next Steps

Befitting a startup, Luma has the agility and openness to adopt new use cases when they emerge. “As we partner with production studios and artists working in this domain, we will listen and build the tools and integrations they need to make their workflows possible,” Jain says. “We’re growing quickly, and we’re distributed, with members of our team in Silicon Valley, Europe, India, and Canada. We think this concept has a lot of potential.”

To learn more about Luma AI, please visit their website at https://lumalabs.ai/. If you’d like to join their beta program and try out the app for yourself, please connect with Amit Jain directly at hello@lumalabs.ai.

Captured and rendered with Luma.AI.
