A few weeks ago, we introduced Moon, the first-ever live-action 6DoF VR experience. At the time, we promised that we’d talk more in the near future about the production of Moon and the Light Field technology that powers it. Well, the near future is now!
On Set Production of Moon
We filmed Moon on the historic Mack Sennett soundstage in LA with the current version of the Lytro Immerge Light Field VR camera rig. This rig is a planar configuration that we rotate and film in five “wedges.” In the case of Moon, we captured three “action wedges” with real actors and two “static wedges” with the rest of the set. Filming in wedges works exceptionally well for on-set productions because the process mimics traditional film production, with the director sitting behind the camera working with the talent and cinematographer, and the crew on set but out of the camera’s view.
Even though wedge-based shooting is great for on-set production, it posed some tricky problems we needed to solve. While the Mack Sennett stage is a crown jewel of cinema history here in the US, its floor is uneven, requiring stabilization of the dolly and precise leveling of the camera for each rotation. Because we were mixing footage from multiple days, we used on-set markers and lasers to realign the camera accurately day to day and wedge to wedge. As with any 360 production, a multi-day shoot means the set stayed “hot” for the duration.
The story – a cheeky take on the classic conspiracy theory that the moon landing was faked – required a lighting transition at the cue “CUT!” from the actor playing the director; coordinating the precise timing and duration of that transition across wedges required frame-level lighting controls. Additionally, Moon relied on perfectly timed performances in three of the five wedges, so we replayed audio cues from the astronaut’s best take to drive the other two action-wedge performances. The actors couldn’t move from wedge to wedge; accordingly, their paths and marks on stage were carefully directed so they would remain in the camera’s view.
Filming with the Lytro Immerge system presents some idiosyncrasies of its own on set. A Light Field camera captures a “viewing volume” – a space the viewer can move around in and get a full 6DoF experience. The size of this volume is directly proportional to the size of the Light Field camera, so the rig itself is about three feet across and three feet high. Fortunately, the camera (and its rotating head for wedge-based shooting) is mounted on a Chapman dolly – a standard for on-set production. This makes it fairly easy to move the camera into position – as long as we pay close attention to the rack of servers tethered to the rig, which captures the data streaming to storage in real time. We position the rack directly behind the camera, and when we rotate the camera to the next wedge, the servers glide around behind it, so the entire system occupies only about a six-foot by four-foot area at any time.
Of course, Lytro Immerge isn’t just a giant wedge-based shooting system – it’s a Light Field solution. That makes for some funny moments on set, like watching people “frame” a shot by squatting at seated height in front of the camera and moving their heads back and forth to get the same sense of perspective they’ll see in the headset – at Lytro, we’ve come to call this “meerkating.”
About the Technology: Merging Planar Light Field Volumes
I like to tell my team that “rigs are cheap” – perhaps ironic given the capital expense of shooting with many super-high-end cameras and lenses. But what I mean to emphasize with that witticism is the fundamental configurability of Lytro Immerge. Our Light Field processing software is agnostic to the configuration of the cameras, so we design camera rigs for the workflows and use cases we see most. And because our camera hardware and software setup is completely modular, creating a new rig topology really boils down to mechanical engineering – no new software or electronics design required.
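To make the configuration-agnostic idea concrete, here is a minimal sketch – not Lytro’s actual code, and every name in it (`CameraPose`, `planar_rig`) is hypothetical – of a pipeline that consumes a rig purely as a list of camera poses. Under that assumption, a new rig topology really is just a new pose-generator function, with no change to the downstream processing:

```python
from dataclasses import dataclass

# Hypothetical sketch: a rig is described purely as a list of camera
# poses; the processing pipeline never assumes a particular topology.

@dataclass
class CameraPose:
    x: float        # metres, rig-local horizontal offset
    y: float        # metres, rig-local vertical offset
    yaw_deg: float  # facing direction, degrees

def planar_rig(cols: int, rows: int, spacing: float) -> list[CameraPose]:
    """A planar wedge: a cols x rows grid of forward-facing cameras."""
    return [CameraPose(c * spacing, r * spacing, 0.0)
            for r in range(rows) for c in range(cols)]

# Any pose-agnostic pipeline only needs the list, so swapping rig
# designs means swapping generator functions, nothing more.
rig = planar_rig(cols=12, rows=8, spacing=0.08)
print(len(rig))  # 96 cameras in this hypothetical wedge
```

The design choice the sketch illustrates is that the rig is data, not code: the camera count, spacing, and layout live in the pose list, so mechanical redesigns never touch the software.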
The planar configuration we used to shoot Moon is just one example of that ethos. Each wedge creates a planar Light Field volume in which the user has 6DoF mobility, but with imagery in only one direction. To create the final experience, we automatically merge these five planar Light Fields into a single spherical Light Field volume in which the viewer has 6DoF mobility with imagery all around. This is quite different from “stitching”: because we’ve reconstructed the geometry of the scene, it’s more akin to automatically lining up five highly detailed 3D models. As a result, no manual work is needed, and the content has no stitching artifacts. In the end, we used data from more than 300 cameras to capture the entire Light Field volume for Moon.
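As a rough illustration of the merge geometry – a sketch under assumed conventions, not Lytro’s pipeline – each wedge’s reconstructed geometry lives in its own local frame, and since five wedges cover a full rotation, each wedge is related to the shared world frame by a yaw of 360° / 5 = 72°. Rotating every wedge’s points by its own yaw lines the five volumes up in one spherical volume:

```python
import math

WEDGES = 5
WEDGE_ANGLE = 360.0 / WEDGES  # 72 degrees of yaw per wedge

def wedge_to_world(point, wedge_index):
    """Rotate a point from a wedge's local frame into the shared
    world frame, using that wedge's yaw about the vertical (y) axis."""
    theta = math.radians(wedge_index * WEDGE_ANGLE)
    x, y, z = point
    xw = x * math.cos(theta) + z * math.sin(theta)
    zw = -x * math.sin(theta) + z * math.cos(theta)
    return (xw, y, zw)

# Each wedge looks down its local +z axis; after transformation, the
# five forward directions fan out to cover the full 360 degrees.
forwards = [wedge_to_world((0.0, 0.0, 1.0), i) for i in range(WEDGES)]
```

After this transform, overlapping geometry from adjacent wedges should coincide – which is why the merge behaves like automatically lining up 3D models rather than blending pixels at a seam.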
6DoF Volume Capture Inside Nuke
View more behind-the-scenes photos on Facebook.