Producing cinematic VR presents numerous challenges. For example, all cameras and production systems available today rely heavily on stitching to merge camera views, which introduces seams, warping and distortion. The challenges grow when trying to produce stereoscopic or omni-stereo 360° video. Typical stereoscopic cameras and systems for VR produce left/right stereo imagery only in limited sweet spots along the horizon; as you rotate your head left to right, you perceive stereo only in those spots, and if you tilt your head, or look up or down, the stereo effect breaks.
With Lytro Immerge, we’re doing cinematic capture and post-production in a unique way rooted in Light Field technology. Our system produces live action with full six degrees of freedom (6DoF), true stereo perspective and accurate parallax in any direction, including view-dependent highlights and reflections. VR content delivered via 3D game engines has enormous presence because the game engine provides true stereo in every direction, yet it lacks the ability to accurately depict real-world light behavior such as subsurface scattering. Lytro Immerge delivers this level of presence, but with live-action video and all its natural light characteristics. Additionally, our live-action VR can be seamlessly integrated with film-quality CG elements, producing unrivaled VR experiences with high immersion and high realism.
The Lytro Immerge camera captures an entire 360° scene via five 72° rotations.
Our Lytro Immerge camera is a 95-element planar Light Field array with a 90° field of view. We capture the environment in five separate wedges: to cover a full 360° scene, we rotate the camera five times, 72° per wedge. This produces 475 individual camera views per frame. We typically capture a static clean plate for the background, then do a second pass to capture whatever live-action wedges the story requires. The captured imagery is pre-processed to create RGB images, depth maps and a virtual camera rig; these assets are then handed over to the post-production team.
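The capture arithmetic above can be sketched in a few lines. The numbers come from the text; the helper names and the derived overlap figure are our own illustration, assuming the 90° field of view is centered on each 72° wedge.

```python
# Illustrative arithmetic for the Lytro Immerge capture geometry described
# above. Constants are from the article; the overlap derivation is ours.

CAMERAS_PER_WEDGE = 95   # elements in the planar Light Field array
WEDGE_FOV_DEG = 90       # field of view of the array
ROTATION_STEP_DEG = 72   # rotation between successive wedges
NUM_WEDGES = 5           # rotations needed to cover the full scene

def total_coverage_deg():
    """Degrees of scene swept by all rotations combined."""
    return NUM_WEDGES * ROTATION_STEP_DEG

def total_views_per_frame():
    """Individual camera views captured for one full 360° frame."""
    return NUM_WEDGES * CAMERAS_PER_WEDGE

def wedge_overlap_deg():
    """Angular overlap between adjacent wedges (90° FOV vs 72° step)."""
    return WEDGE_FOV_DEG - ROTATION_STEP_DEG

print(total_coverage_deg())     # 360
print(total_views_per_frame())  # 475
print(wedge_overlap_deg())      # 18
```

The 18° of overlap between neighboring wedges is what lets the system blend wedges without the stitching seams described earlier.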
Due to the number of individual camera views we capture, our process requires more rendering and processing than a typical cinematic VR project; however, there are important advantages to working with Lytro Immerge. The original source material has even, balanced color, a depth channel with real-world values, and a precise virtual camera rig, so integrating live action with CG content is a much cleaner process. To preview edits and navigate through the content during post-production, Lytro provides a large suite of Light Field tools. By removing stitching from the post-production process, we can spend more time improving the imagery overall rather than just assembling lat-longs. Bottom line: the Lytro Immerge system integrates well into post-production workflows while producing cinematic VR content with true presence.
In many ways our post-production process is more straightforward than that of other live-action VR systems. No laborious stitching is required, and the color relationship between cameras is constant throughout the 360° of capture. While pre-processing takes care of the majority of the color differences between camera views, post-production handles the remaining balancing that automated systems simply can’t catch. The ability to place CG objects using real-world values speeds up integration and gives you great control when isolating captured live-action objects in a 3D world. The final result is distortion-free live-action content with accurate real-world depth, perspective and incidence of light.
Because we capture and work in a Light Field volume, compositing is actually easier. The compositor determines how an element will be handled, and for similar elements that creative choice is then repeated in every individual view. While working with 475 camera views may seem daunting at first, most work translates between views easily: simply set up a key for the master view using one of the camera views in the array, and those creative choices are propagated to the other views. There’s freedom in not having to worry about every element in every view, yet there’s flexibility to inspect individual elements to ensure continuity. The content benefits from verifying all views, and the process is streamlined by our custom playback and previewing tools.
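The master-view workflow above can be sketched as follows. This is a minimal illustration only: the `KeySettings` and `View` structures are hypothetical stand-ins, not Lytro's actual data model, and a real pipeline would do this inside Nuke or Lytro's own tools.

```python
# Minimal sketch: propagate one creative choice (here, a chroma-key setting)
# from a master view to every camera view in the array. All names here are
# hypothetical; the real Lytro pipeline's structures differ.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class KeySettings:
    screen_color: tuple   # RGB of the screen color to key out
    tolerance: float      # matte tolerance around that color

@dataclass
class View:
    index: int
    key: Optional[KeySettings] = None

def propagate_key(master: KeySettings, views: list) -> None:
    """Copy the master view's key settings into every camera view."""
    for view in views:
        view.key = master

# 5 wedges x 95 cameras = 475 views per frame, as described in the article.
views = [View(i) for i in range(475)]
master_key = KeySettings(screen_color=(0, 177, 64), tolerance=0.12)
propagate_key(master_key, views)
```

The point of the sketch is the shape of the workflow: one creative decision made on the master view, then applied uniformly, with individual views still inspectable for continuity.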
Previewing individual element views in the Lytro Immerge camera.
In most ways, our post-production methodologies are similar to established post-production practices, with subtle differences. For example, a process that may feel familiar to the post-production team is the integration of CG elements with captured video. A 3D artist simply imports the virtual Lytro Immerge camera rig into Maya or Houdini. This virtual camera rig precisely renders CG elements as Light Field content, perfectly positioned in the virtual world and ready to comp back in. Registration is perfect, and the CG content behaves in the headset just as convincingly as the captured live content, with full 6DoF, perfect stereo, accurate parallax and, if lit right, view-dependent light effects.
In contrast to traditional VFX practices, we don’t anti-alias our work in post-production. Typically, to achieve seamless blending between layered elements, production artists are accustomed to compositing elements with soft edges. However, in the Lytro Immerge system, we need post-production to deliver layered elements with sharp edges for clean depth. During playback in VR, the Lytro Immerge player smoothly anti-aliases those edges while rendering 6DoF views of the scene.
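To illustrate the sharp-edge requirement, the sketch below snaps a soft, anti-aliased alpha edge to a binary matte. The threshold and sample values are invented for demonstration; the actual rules the Lytro Immerge pipeline applies are not specified in this article.

```python
# Illustrative only: convert a soft (anti-aliased) alpha edge into the hard,
# binary edge the text says post-production should deliver for clean depth.
# The 0.5 threshold and the sample values below are our own assumptions.

def harden_alpha(alpha_row, threshold=0.5):
    """Snap each partial-coverage alpha value to fully opaque or transparent."""
    return [1.0 if a >= threshold else 0.0 for a in alpha_row]

soft_edge = [0.0, 0.1, 0.4, 0.6, 0.9, 1.0]   # anti-aliased edge profile
hard_edge = harden_alpha(soft_edge)
print(hard_edge)  # [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
```

The re-smoothing of these hard edges then happens at playback time, when the Lytro Immerge player anti-aliases while rendering 6DoF views.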
At Lytro, we are dedicated to transforming the world of cinematic VR with Lytro Immerge. Our Light Field post-production system and techniques for live-action VR capture are enabling creative freedom and providing new storytelling mechanisms. To ensure our customers’ success producing Light Field VR, our camera and playback system is designed with support for existing tools and workflows. VFX professionals with experience in Nuke and 3D tools like Maya or Houdini will be able to jump right into the post-production process.