Lytro’s Light Field Production Methodology

Producing cinematic VR presents numerous challenges; for example, all camera and production systems available today rely heavily on stitching to merge camera views, which introduces seams, warping and distortion. The challenges grow further when trying to produce stereoscopic or omni-stereo 360° video. Typical stereoscopic cameras and systems for VR produce left/right stereo imagery only in limited sweet spots on the horizon; as you rotate your head left to right, you perceive stereo only in those sweet spots, and if you tilt your head, or look up or down, the stereo effect breaks.

With Lytro Immerge, we’re doing cinematic capture and post-production in a unique way rooted in Light Field technology. Our system produces live action with a full six degrees of freedom (6DoF), true stereo perspective and accurate parallax in any direction, including view-dependent highlights and reflections. VR content delivered via 3D game engines has enormous presence because the engine provides true stereo in every direction, yet it lacks the ability to accurately depict real-world light behavior such as subsurface scattering. Lytro Immerge delivers this level of presence but with live-action video and all the natural light characteristics. Additionally, our live-action VR can be seamlessly integrated with film-quality CG elements, producing unrivaled VR experiences with high immersion and high realism.

The Lytro Immerge camera captures an entire 360° scene via five 72° rotations.

Our Lytro Immerge camera is a 95-element planar Light Field array with a 90° field of view. We capture the environment in five separate wedges: to capture a full 360° scene, we rotate the camera five times at 72° per wedge. This produces 475 individual camera views per frame. We typically capture a static clean plate for the background and do a second pass to capture whatever live-action wedges the story requires. The captured imagery is pre-processed to create RGB images, depth maps and a virtual camera rig. These assets are then handed over to the post-production team.
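
As a quick sanity check on those numbers, here is a back-of-the-envelope sketch in Python. The array size, wedge angle and field of view come straight from the paragraph above; the variable names and the overlap calculation are our own illustration, not Lytro tooling.

```python
# Back-of-the-envelope numbers for the five-wedge capture described above.
# Figures from the article: 95-element planar array, 90-degree field of view,
# five rotations of 72 degrees each. Everything else is illustrative only.

ELEMENTS_PER_WEDGE = 95      # cameras in the planar Light Field array
WEDGE_ANGLE_DEG = 72.0       # rotation between successive wedges
FIELD_OF_VIEW_DEG = 90.0     # horizontal field of view of the array
NUM_WEDGES = 5               # 5 x 72 = 360 degrees of coverage

views_per_frame = ELEMENTS_PER_WEDGE * NUM_WEDGES
print(f"Individual camera views per frame: {views_per_frame}")        # 475

# Each wedge's field of view exceeds its 72-degree slice, so neighbouring
# wedges share coverage -- useful margin when blending wedges in pre-processing.
overlap_deg = FIELD_OF_VIEW_DEG - WEDGE_ANGLE_DEG
print(f"Coverage shared with each neighbouring wedge: {overlap_deg:.0f} degrees")

# Centre angles for the five wedge captures (purely illustrative orientation).
wedge_centres = [i * WEDGE_ANGLE_DEG for i in range(NUM_WEDGES)]
print("Wedge centre angles:", wedge_centres)   # [0.0, 72.0, 144.0, 216.0, 288.0]
```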

Due to the number of individual camera views we capture, our process requires more rendering and processing than a typical cinematic VR project; however, there are important advantages to working with Lytro Immerge. The Lytro Immerge source material has even, balanced color, a depth channel with real-world values, and a precise virtual camera rig, so integrating live action with CG content is a much cleaner process. To preview edits and navigate through the content during post-production, Lytro provides a large suite of Light Field tools. By removing stitching from the post-production process, we are able to spend more time making the imagery better overall rather than just assembling lat-longs. Bottom line: the Lytro Immerge system integrates well into post-production workflows while producing cinematic VR content with true presence.

In many ways our post-production process is more straightforward than that of other live-action VR systems. No laborious stitching is required, and the camera color relationship is constant throughout the 360° of capture. While pre-processing takes care of the majority of the color differences between camera views, post-production handles the remaining balancing that automated systems simply can’t catch. The ability to place CG objects using real-world values speeds up integration and gives you great control when isolating captured live-action objects in a 3D world. The final result is distortion-free live-action content with accurate real-world depth, perspective and incidence of light.
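
To make the “real-world values” point concrete, here is a minimal depth-based merge sketched in Python with NumPy. It is a generic z-compositing example, not Lytro’s pipeline; the function name and the assumption that CG and live-action layers share the same camera view and metric depth units are ours.

```python
import numpy as np

def depth_merge(live_rgb, live_depth, cg_rgb, cg_depth, cg_alpha):
    """Composite a CG element against live action using real-world depth.

    Illustrative sketch only (not Lytro's pipeline). Assumes both layers are
    rendered from the same camera view and store depth in the same metric
    units, e.g. metres from the camera.

    live_rgb, cg_rgb     : (H, W, 3) float arrays
    live_depth, cg_depth : (H, W) float arrays
    cg_alpha             : (H, W) float array in [0, 1]
    """
    # The CG layer wins a pixel only where it exists and is nearer the camera.
    cg_in_front = (cg_alpha > 0.0) & (cg_depth < live_depth)
    weight = np.where(cg_in_front, cg_alpha, 0.0)[..., np.newaxis]

    out_rgb = cg_rgb * weight + live_rgb * (1.0 - weight)
    out_depth = np.where(cg_in_front, cg_depth, live_depth)
    return out_rgb, out_depth
```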

Because we capture and work in a Light Field volume, compositing is actually easier to manage. The compositor determines how an element will be handled, and for similar elements that creative choice is then repeated in every individual view. While working with 475 camera views may seem daunting at first, most work translates between camera views easily. Simply set up a key for the master view using one of the camera views in the array, and those creative choices are propagated into the other camera views. There’s freedom in not having to worry about all the elements in every view, yet there’s flexibility to see the individual elements to ensure continuity. The content benefits from verifying all views, and the process is streamlined using our custom playback and previewing tools.
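
The propagation idea can be sketched as a simple loop: dial in the operation once on the master view, then reuse the same parameters across the array. The Python below is schematic only; `load_view` and `pull_key` are toy stand-ins rather than Lytro tools, and in practice this step would live inside a compositing package such as Nuke.

```python
import numpy as np

# Schematic sketch of propagating one creative decision (here, a simple
# colour-difference key) from a chosen master view to every view in the array.

NUM_VIEWS = 475        # 95-element array x 5 wedges (see the capture section)
MASTER_VIEW = 0        # the view used to dial in the key interactively

def load_view(view_index):
    """Stand-in for real image I/O; returns a dummy (H, W, 3) float image."""
    rng = np.random.default_rng(view_index)
    return rng.random((4, 4, 3))

def pull_key(rgb, screen_colour, tolerance):
    """Toy keyer: matte approaches 0 where a pixel matches the screen colour."""
    distance = np.linalg.norm(rgb - np.asarray(screen_colour), axis=-1)
    return np.clip(distance / tolerance, 0.0, 1.0)

# 1. Settle the creative choice once, on the master view.
key_params = {"screen_colour": (0.1, 0.7, 0.2), "tolerance": 0.4}
master_matte = pull_key(load_view(MASTER_VIEW), **key_params)

# 2. Apply the identical parameters to every other view. Because the views
#    share lighting and colour balance, the settings transfer cleanly, and
#    any individual view can still be inspected or overridden for continuity.
mattes = [pull_key(load_view(v), **key_params) for v in range(NUM_VIEWS)]
```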

Previewing individual element views in the Lytro Immerge camera.

In most ways, our post-production methodologies are similar to established post-production practices, but with subtle differences. For example, a process that may feel familiar to the post-production team is the integration of CG elements with captured video. A 3D artist simply imports the virtual Lytro Immerge camera rig into Maya or Houdini. This virtual camera rig precisely renders CG elements as Light Field content, perfectly positioned in the virtual world and ready to comp back in. Registration is perfect, and the CG content behaves in headset just as convincingly as the captured live content, with full 6DoF, perfect stereo, accurate parallax and, if lit right, view-dependent light effects.
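
As a rough illustration of what that per-view render pass might look like when scripted, the sketch below builds one render job per rig camera. The scene path, camera naming convention and command line are hypothetical placeholders; they are not Lytro’s interface or any particular renderer’s CLI.

```python
from pathlib import Path

# Renderer-agnostic sketch of batch-rendering a CG element once per camera in
# the imported virtual rig: one pass per view, so the CG lands in the same
# Light Field layout as the captured footage. All paths, names and flags below
# are made-up placeholders for illustration only.

SCENE = Path("shots/shot010/cg_element.hip")      # hypothetical Houdini scene
OUTPUT_DIR = Path("renders/shot010/cg_element")
NUM_VIEWS = 475                                   # one rig camera per captured view

render_jobs = []
for view_index in range(NUM_VIEWS):
    camera = f"/obj/immerge_rig/cam_{view_index:03d}"            # assumed naming
    out_pattern = OUTPUT_DIR / f"view_{view_index:03d}.$F4.exr"  # per-view EXRs
    render_jobs.append(
        ["renderer_cli", str(SCENE),              # placeholder executable name
         "--camera", camera,
         "--output", str(out_pattern)]
    )

# Submit `render_jobs` to the farm or run locally; shown here as a dry run.
for job in render_jobs[:3]:
    print(" ".join(job))
```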

In contrast to traditional VFX practices, we don’t anti-alias our work in post-production. Typically, to achieve seamless blending between layered elements, production artists are accustomed to compositing elements with soft edges. However, in the Lytro Immerge system, we need post-production to deliver layered elements with sharp edges for clean depth. During playback in VR, the Lytro Immerge player smoothly anti-aliases those edges while rendering 6DoF views of the scene.
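
A small, purely illustrative example of why soft edges and depth don’t mix: once an edge is anti-aliased, the transition pixels take values partway between the foreground and background depths, and those in-between depths describe surfaces that don’t exist in the scene.

```python
import numpy as np

# Purely illustrative: a foreground object at 2 m in front of a wall at 10 m.
fg_depth, bg_depth = 2.0, 10.0

# A hard (aliased) coverage edge: each pixel is either fully object or wall.
hard_alpha = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
hard_depth = np.where(hard_alpha > 0.5, fg_depth, bg_depth)
print(hard_depth)        # [ 2.  2.  2. 10. 10. 10.]  -- every value is real

# A soft (anti-aliased) edge: the transition pixels mix the two layers.
soft_alpha = np.array([1.0, 1.0, 0.66, 0.33, 0.0, 0.0])
soft_depth = soft_alpha * fg_depth + (1.0 - soft_alpha) * bg_depth
print(soft_depth)        # [ 2.    2.    4.72  7.36 10.   10.  ]

# 4.72 m and 7.36 m are surfaces that don't exist in the scene; reprojected
# into new viewpoints they smear the edge between object and wall. Hence the
# request for hard-edged layers, with anti-aliasing deferred to playback.
```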

At Lytro, we are dedicated to transforming the world of cinematic VR with Lytro Immerge. Our Light Field post-production system and techniques for live-action VR capture are enabling creative freedom and providing new storytelling mechanisms. To ensure our customers’ success producing Light Field VR, our camera and playback system is designed with support for existing tools and workflows. VFX professionals with experience in Nuke and 3D tools like Maya or Houdini will be able to jump right into the post-production process.

About Orin Green
VR Visual Effects / Post-Production Supervisor. His production credits include Spawn, Lemony Snicket, Thor, Avengers, Captain America, and Ang Lee’s Hulk. His favorite superhero is Squirrel Girl; after all, she defeated Dr. Doom.

12 Comments

  1. Awesome to see more info!

    > and do a second pass to capture whatever live action wedges the story requires

    So…. for 360° video, you would have to capture five separate sequences, and then loop them? Wouldn’t that lead to artifacting at the five stitch lines? Not gonna lie, I’m a little disappointed we’re not going to get true 360° light field video with this guy, but I guess I was overoptimistic about how much tech that would require! Super appreciative of all your hard work!

    • Hey Erik – To capture a full 360° light field video of a scene with our current planar camera we use the five-wedge technique described in the blog, and the end product is a VR video with parallax and correct stereo in all directions, at any angle, with support for rotation and translation within the VR experience. It is most definitely full Light Field video and is unlike any you’ve likely seen to date. Not sure if you have plans for Tribeca and have a ticket or pass that gets you into Tribeca Immersive, but Within is sharing their full Light Field VR piece “Hallelujah”, which was captured and produced using our system. Cinematic Light Field VR in headset is dramatic to experience firsthand.

      When lighting a scene for the Lytro Immerge camera, consistent, frame-controlled lighting is used from wedge to wedge. Action is recorded in wedges with identical lighting and composited into the scene in post. CG content can be composited anywhere in the video frame in post as well, even across wedges. Once post-production is complete, all content is rendered into a Light Field volume.

      We eliminate the need to stitch by capturing a dense volume of individual element views, which are pre-processed to produce seamless 360 frames, with depth information, for any point of view in the Light Field volume. Our player delivers the left/right views of the scene based on the HMD’s position in the volume.

      A significant advantage to shooting a planar rig in wedges is that the process more closely follows current production methods, and allows lighting, sound, art, script, and directorial teams to be behind the camera, able to interact with talent across multiple takes. Stories need to be written with the system in mind; Hallelujah is a great example of this.

      • I always thought the strongest promise of the Immerge rig was that you could record on-set without concern for rigs and crew, as you could just delete them from the dataset. (That would be possible during a live broadcast as well.) I guess this means we’re still bound by the archaic image-based workflow, and it is still years until we see that happening.
        Is there just one Immerge rig in existence at this point?

        • The current Lytro Immerge camera is better suited for controlled production environments. Teams using the system have found that the planar configuration actually provides more control on-set and in post than other camera configurations. Captured elements can be removed during post, or composited in. By design, the system’s post-production workflow is similar to established industry standards. This enables current post, VFX and special effects teams to work with the content using tools they know, and techniques they’ve relied on.

  2. To fully capture the environment you need multiple 360° captures. Then every missing or hidden part of the set will be there. We did that when I was working for Autodesk in the south of France.
    We were the first team to fully capture a multi-360°-camera environment. Cheers! 😇😎🇫🇷

  3. Thank you so much for the detailed account of how you grab all that data/content. I’m really interested in the parallax/6DoF, and what output formats and sizes you end up with.

  4. Quick question from this section….

    “For example, a process that may feel familiar to the post-production team is the integration of CG elements with captured video. A 3D artist simply imports the virtual Lytro Immerge camera rig into Maya, or Houdini. This virtual camera rig precisely renders CG elements as Light Field content, perfectly positioned in the virtual world and ready to comp back in. ”

    ….is this a hypothetical solution for CG integration, or are you developing your own lightfield-based renderer as well? Currently no one has this on the market, though I know Otoy is close to having it ready.

    Thanks,
    -andy

    • Hey Andy – yes, we have full Light Field support for CG integration in our Lytro Immerge system and in the VR experiences created with it. Light Field CG assets are generated using our virtual camera rig inside your preferred 3D rendering solution. Image quality can exceed what VR game engines can deliver, with full view-dependent shading, in addition to 6DoF, parallax and stereo at any angle.

      • Steve – that’s amazing in its own right, particularly that it’s renderer agnostic. Could I email you directly? I’m not sure of the best way to get in touch. I have a few questions about my students collaborating with Lytro on VR.
        -a
