6DoF in Live Action VR

In Part 1 of this six degrees of freedom (6DoF) series we illustrated each of the degrees of freedom so they can be recognized in a VR headset. In this article we’ll describe the prevalent live action 360 video types, break down how many degrees of freedom are viewable in each, and note relevant considerations for each video type.

In terms of current VR hardware (as of 04/2017), full 6DoF is supported in head mounted displays (HMDs) with both rotational and positional tracking (e.g. Oculus Rift, HTC Vive™, PlayStation® VR). Mobile devices are becoming more powerful, and it is likely only a matter of time before they also support more than three degrees of freedom (3DoF). There are some great 6DoF 360 examples enabled by 3D game engines like Unreal and Unity, but to date examples of live action 360 video content with full 6DoF are extremely rare.

Therein lies the problem.

Creating a remarkable cinematic VR experience is not just a question of capturing a beautiful 360 scene or adding stunning graphics; to be remarkable, the experience has to have presence. When the viewer forgets where they are and truly feels like they’re in an alternate reality, we’ve achieved presence. To deliver presence in live action VR, we rely on high immersion, which requires full 6DoF, with parallax and perfect stereo in every direction, as well as the correct flow of light in the scene. Viewers are often unaware of how their brain interprets all these visual signals, but when something is wrong or missing, immersion breaks and the viewer is often unable to describe why.

Examples of Visual Cues for High Immersion:

  • Parallax – The most important visual cue for our position in space is the perception of distance: objects that are close appear to move faster than objects further away. Our brain is wired to notice these tiny differences and to associate distance with relative motion speed. Try shifting your head from side to side (position, not rotation) and notice how objects around you appear to move at different speeds. This very noticeable difference in motion between near and far objects is parallax (see the sketch after this list). In VR, when objects don’t have parallax, we interpret the scene as flat even if it was captured in stereo. Without parallax, the most important visual cue for immersion, presence is broken.
  • Reflections – Many objects in real life have reflective surfaces. They can be fully reflective, like a mirror, or partially reflective, like water, wet surfaces, polished metal, shiny plastic, etc. As we shift our viewpoint of a reflective surface, both the details in the reflections and the location of those reflections on the object’s surface move with us. Those reflections, and the details within them, behave like any other object in space – with parallax. If reflective details don’t respond to our shifting viewpoint with parallax, we interpret the reflections as artificial, like a lifeless reflection painted on top of the object. Reflections are a critical visual cue for immersion; if they appear artificial in a VR scene, presence is broken.
  • Specular highlights – When light bounces off a highly reflective curved surface, the bright spot of light that appears on the object is referred to as a specular highlight. As with reflections, when we move relative to a specular highlight, that highlight changes in both brightness and position, because the surface reflects light toward us from different angles. If the specular highlight doesn’t change to correspond to our movement, we interpret the highlight as artificial, as if it were painted on the object (see the shading sketch after this list). Specular highlights are another critical visual cue and, like the cues above, if they appear broken in VR, presence is broken.
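
The parallax cue above can be made concrete with a little geometry. The short sketch below is purely illustrative (it is not from the article): it computes the apparent angular shift of a point when the head translates 5 cm sideways, using arbitrary example distances, and shows that near objects shift far more than distant ones.

```python
import math

def apparent_shift_deg(head_offset_m, distance_m):
    """Angular shift of a point straight ahead when the head translates
    sideways by head_offset_m; the point sits distance_m away."""
    return math.degrees(math.atan2(head_offset_m, distance_m))

head_offset = 0.05  # 5 cm sideways head movement (example value)

for distance in (0.5, 2.0, 10.0):  # near, mid and far objects, in metres
    shift = apparent_shift_deg(head_offset, distance)
    print(f"object at {distance:4.1f} m shifts by {shift:5.2f} degrees")
```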

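The reflection and specular highlight cues are view dependent in the same sense. As a rough illustration (again, not the article’s pipeline), the sketch below uses a standard Blinn-Phong specular term to show how the brightness of a highlight collapses when the viewpoint moves away from the mirror-reflection direction; the surface normal, light direction and shininess value are arbitrary example inputs.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def specular_intensity(view_dir, light_dir, normal, shininess=64):
    """Blinn-Phong specular term: the highlight is bright only when the
    half vector between the light and view directions aligns with the normal."""
    half = normalize(normalize(view_dir) + normalize(light_dir))
    return max(float(np.dot(normalize(normal), half)), 0.0) ** shininess

normal = np.array([0.0, 0.0, 1.0])       # surface facing straight up
light_dir = np.array([1.0, 0.0, 1.0])    # light arriving from one side

views = {
    "view at the mirror direction": np.array([-1.0, 0.0, 1.0]),
    "view after the head shifts":   np.array([0.0, 0.0, 1.0]),
}
for label, view in views.items():
    print(f"{label}: {specular_intensity(view, light_dir, normal):.3f}")
```
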
In VR, parallax, reflections and specular highlights that respond authentically to viewpoint changes are fundamental to producing immersion and achieving presence. The only type of live action video in VR with parallax and support for view-dependent reflections and specular highlights is Light Field VR.

Types of Live Action (Cinematic) 360 Video

360° Mono

A stitched panoramic video captured from multiple cameras can deliver a scene in VR, but it only supports the first three degrees of freedom in our list. 360° mono has no stereo, no sense of depth, no parallax; the same video is delivered for each eye. Stitching artifacts between camera views are often visible. 360° mono video does not support any immersive visual cues like parallax, reflections or specular highlights.

360° Stereo

With multiple pairs of left/right cameras, 360° stereo video can be captured and stitched into panoramic video for VR. Stereo 360° provides 3DoF with functional stereo when your point of view aligns horizontally with the camera pairs used to capture the original content, but the stereo effect breaks between those original L/R camera views, when you look up or down, and when you roll your head off the horizon line. There is a sense of depth while the stereo effect is functioning, but there are no immersive visual cues like parallax, reflections or specular highlights. As with 360° mono, stitching artifacts are often visible, and in stereo they are much harder to eliminate.
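
To see why the stereo effect collapses off-axis, here is a rough back-of-the-envelope sketch (not from the article): perceived disparity scales with the camera pair’s baseline projected perpendicular to the viewing direction, so it falls off as the viewer yaws away from the direction the pair was facing and vanishes at 90°. The 64 mm baseline is an example value.

```python
import math

def effective_baseline_mm(baseline_mm, yaw_offset_deg):
    """Approximate stereo baseline 'seen' by the viewer when looking at a
    yaw offset from the direction a fixed L/R camera pair was facing.
    Disparity, and therefore the stereo effect, scales with this value."""
    return baseline_mm * math.cos(math.radians(yaw_offset_deg))

for yaw_offset in (0, 30, 60, 90):
    print(f"{yaw_offset:2d} deg off-axis -> effective baseline "
          f"{effective_baseline_mm(64, yaw_offset):4.1f} mm")
```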

360° Omnistereo

360° Omnistereo is a computational panoramic video type, produced by stitching together a series of vertical image slices (strips) from multiple camera views to produce a pair of left-eye and right-eye views for any angle along the panorama. Artifacts can be kept negligible depending on the width of the strips: narrower strips produce smoother omnistereo video, and stereo fidelity is related to the number of vertical strips used. The stereo effect is good when viewed along the 360° horizon, but it diminishes when the viewer looks up or down and breaks when they roll their head. Omnistereo is limited to a 3DoF experience; it produces a sense of depth while the stereo effect is functioning, but there are no immersive visual cues like parallax, reflections or specular highlights. 360° Omnistereo content is challenging to produce, as it requires significant know-how and, to date, there are no accessible production systems for creating it.
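
The slice-selection idea behind omnistereo can be sketched in a few lines. The code below is a simplified illustration under assumed parameters (a 16-camera circular rig, 15 cm rig radius, 64 mm IPD); for each panorama column it only chooses which camera supplies the vertical strip so that left- and right-eye rays stay tangent to a small viewing circle. A production pipeline would additionally warp and blend between neighbouring cameras rather than picking the nearest one.

```python
import numpy as np

def omnistereo_column_sources(num_cameras, out_width, eye="left",
                              ipd=0.064, rig_radius=0.15):
    """For each column of the output panorama, choose which rig camera
    supplies the vertical strip. Left- and right-eye rays are kept tangent
    to a circle of diameter ipd, offsetting the chosen camera by a fixed angle."""
    sign = 1.0 if eye == "left" else -1.0
    # Angle between an eye's tangent ray and the radial (outward) direction
    offset = sign * np.arcsin((ipd / 2.0) / rig_radius)
    yaw = np.arange(out_width) / out_width * 2.0 * np.pi     # view direction per column
    cam_angles = np.arange(num_cameras) / num_cameras * 2.0 * np.pi
    # Pick the camera whose heading is closest to the offset view direction,
    # wrapping angle differences into [-pi, pi) before comparing
    diff = np.angle(np.exp(1j * (yaw[:, None] + offset - cam_angles[None, :])))
    return np.argmin(np.abs(diff), axis=1)                   # camera index per column

left_sources = omnistereo_column_sources(16, 4096, eye="left")
right_sources = omnistereo_column_sources(16, 4096, eye="right")
print(left_sources[:8], right_sources[:8])
```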

360° Light Field VR

360° Light Field VR delivers live action video for VR, captured by an array of cameras and processed into a viewing volume with numerous 360° viewpoints throughout that volume. Perfect stereoscopic viewpoints with parallax are delivered to each eye, in every direction, at any angle of rotation, based on the HMD’s position within the volume. Light Field VR is capable of delivering the full set of immersive visual cues, including parallax and view-dependent shading effects such as interactive specular highlights and reflections. There are no stitching artifacts, and interpupillary distance (IPD) can be modified during playback. The spatial immersion and presence found in Light Field VR is unrivaled by other 360 video formats thanks to the combination of 6DoF with perfect stereo and depth in every direction, full parallax, and view-dependent shading effects. Processing requirements are significant; however, the Lytro Immerge workflow integrates well into industry-standard post-production and VFX workflows.
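
As a conceptual sketch only (this is not Lytro’s actual implementation, and all names and dimensions below are illustrative), the snippet shows the kind of bookkeeping a 6DoF light field player performs: the tracked head pose and a playback-adjustable IPD define two eye positions, which are kept inside the captured viewing volume (modelled here as a simple sphere) before a renderer synthesises a full 360° view for each eye.

```python
import numpy as np

def eye_positions(head_pos, head_yaw, ipd=0.064):
    """Left/right eye sample positions derived from the tracked head
    position and yaw; the IPD can be changed at playback time."""
    # Head's lateral axis in a z-up frame, with yaw measured from the x axis
    right = np.array([np.sin(head_yaw), -np.cos(head_yaw), 0.0])
    half = (ipd / 2.0) * right
    return head_pos - half, head_pos + half

def clamp_to_volume(point, radius=0.5):
    """Keep a requested viewpoint inside the captured viewing volume,
    modelled here as a sphere of the given radius (in metres)."""
    dist = np.linalg.norm(point)
    return point if dist <= radius else point * (radius / dist)

head = np.array([0.10, -0.05, 0.02])                # tracked HMD position (m)
left_eye, right_eye = eye_positions(head, head_yaw=np.radians(30))
left_eye, right_eye = clamp_to_volume(left_eye), clamp_to_volume(right_eye)
# A light field renderer would now resample rays from the captured camera
# array to synthesise a complete 360 degree view for each eye position.
print(left_eye, right_eye)
```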

