We live in an illuminated world; whether from natural or man-made sources, light shines on our surroundings. Light’s directional rays strike the surface features, structures and objects around us. Even in a virtual world, a Light Field can be created as simulated rays of light from CG sources reflect off 3D objects in a virtual scene.
Of all the infinite rays in the Light Field, we can only see those that shine toward our eyes and pass through our pupils. In response, our eyes produce neural signals carrying information about each ray’s color, intensity and direction. Our brain uses these signals to perceive the objects in our world. As we move through the Light Field, different rays of light pass through our pupils, giving our brain additional information that lets us interpret the position of objects in space, along with their materials, including reflections, refractions and more.
To capture and reproduce a Light Field, both the color of the light rays and their path (direction/angularity) need to be recorded. Determining the color and brightness of a ray of light is straightforward: any pixel in a 2D image (live action or rendered), captured from within that Light Field, provides color and brightness information about all of the rays of light that intersected that pixel. This is more precisely a “bundle of light” (all the rays of light captured by a single pixel), but for simplicity in this post we’ll continue using the term “ray of light” to describe the light captured by a single pixel.
Calculating a light ray’s angular and directional path can be accomplished using a variety of techniques, but all require at least two points to determine the actual direction/angle. One common method uses two planes: each ray passes through and intersects both planes, and from those two intersection points the ray’s angularity and direction can be determined as it travels from one point to the other.
In this illustration, light rays from an orange and an apple are traveling in the same direction, but at different angles. Both light rays intersect Plane A at point u,v and Plane B at point t,s. The u,v and t,s values are used to calculate each path’s angle and direction. Combined with the light rays’ color and brightness, a Light Field can be defined.
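The two-plane scheme described above can be sketched in a few lines of Python. The plane positions and coordinate values here are hypothetical, chosen only to show how a ray’s direction falls out of its two intersection points:

```python
# Minimal sketch of the two-plane parameterization: Plane A at z = 0,
# Plane B at z = plane_gap. A ray is stored as its intersection points
# (u, v) on Plane A and (t, s) on Plane B.

def ray_from_two_planes(u, v, t, s, plane_gap=1.0):
    """Return the origin and (unnormalized) direction of the ray that
    crosses Plane A at (u, v) and Plane B at (t, s)."""
    origin = (u, v, 0.0)
    # The direction is simply the vector from the Plane A point
    # to the Plane B point.
    direction = (t - u, s - v, plane_gap)
    return origin, direction

# Two rays traveling the same overall way, but at different angles,
# like the orange's and the apple's rays in the illustration:
o1, d1 = ray_from_two_planes(0.0, 0.0, 0.5, 0.0)  # shallower angle
o2, d2 = ray_from_two_planes(0.0, 0.0, 1.0, 0.0)  # steeper angle
```

Storing each sampled ray as (u, v, t, s) plus its color is one common way a Light Field dataset is laid out.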
Disparity is another common technique for capturing a Light Field, recording objects in a scene using an array of two or more adjacent cameras. Disparity refers to the differences between the recorded images. Those 2D images are composed of colored pixels corresponding to the color and brightness of the light coming from objects in the scene. By triangulating notable features across the captured images and comparing the disparity of those pixels between images, each object’s position in space and distance from the cameras can be calculated. This data is used to compute a Light Field from the 2D images. In the case of CG renderings of a 3D scene, objects’ distance from camera is often provided as part of the rendering process, though disparity calculation can also be used.
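For a rectified pair of adjacent cameras, the standard textbook relation (a general stereo-vision formula, not something specific to this post) is that depth is inversely proportional to disparity. A small sketch, with hypothetical camera parameters:

```python
# Depth from disparity for a rectified stereo pair:
#   Z = f * B / d
# where f is the focal length in pixels, B the baseline (camera
# separation) in meters, and d the disparity in pixels.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Distance to a scene point, given how far its pixel shifts
    between two horizontally adjacent cameras."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point at infinity or behind)")
    return focal_px * baseline_m / disparity_px

# A feature that shifts 20 px between cameras 10 cm apart,
# with a focal length of 1000 px, lies 5 m away:
z = depth_from_disparity(20.0, focal_px=1000.0, baseline_m=0.10)  # -> 5.0
```

Nearby objects produce large disparities and distant objects small ones, which is why closer features are easier to triangulate accurately.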
The apple and orange in the scene above are recorded using three adjacent cameras. Each individual camera records the scene from a different position, which produces disparity (differences) between the images. Each 2D image is a set of colored pixels representing only the color and brightness of the apple and orange’s surfaces in the scene. Disparity will have to be analyzed to determine angularity.
Using disparity between the three 2D images, the same red pixel on the apple is triangulated to calculate its position in space and distance from the three Light Field viewpoints. The light rays’ angularity and color information are then processed and recorded in the Light Field Volume.
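Once a pixel has been triangulated to a 3D point, each camera contributes one Light Field sample: the direction of the ray between the point and that camera, together with the pixel’s color. A hypothetical sketch (the point and camera positions below are illustrative, not values from the figure):

```python
import math

def light_field_sample(point, camera_pos, color):
    """One recorded ray: the normalized direction from a triangulated
    scene point toward the camera that observed it, plus its color."""
    dx, dy, dz = (camera_pos[i] - point[i] for i in range(3))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    direction = (dx / length, dy / length, dz / length)
    return {"origin": point, "direction": direction, "color": color}

# The same red pixel on the apple, seen from three adjacent cameras:
apple_point = (0.0, 0.0, 5.0)              # triangulated position, 5 m out
cameras = [(-0.1, 0.0, 0.0), (0.0, 0.0, 0.0), (0.1, 0.0, 0.0)]
samples = [light_field_sample(apple_point, c, (255, 0, 0)) for c in cameras]
```

Each camera yields a slightly different direction for the same surface point; collecting these per-pixel samples over the whole array is what builds up the Light Field Volume.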
In VR, volumetric Light Field content is capable of delivering cinematic levels of visual quality with immersion that enables the viewer to truly experience an alternate reality. To achieve that level of presence the Light Field experience needs to include visual cues such as perfect stereo in every direction, full parallax and six degrees of freedom within the volume, and correct light flow within the scene for view dependent effects such as specularity and refraction. Whether it represents live action or rendered CG, Light Field content can produce the most remarkable cinematic VR experiences.
To learn more about Light Fields, their capture, and their benefits, follow our ongoing series:
What is a Light Field Volume
A Light Field Volume is the subset of the entire Light Field that is sampled for playback in VR; the VR experience is bound by the Light Field Volume.
Primer on Types of 360° Video for VR
An overview of the most commonly available types of 360° live action video for VR.
Understanding the Six Degrees of Freedom
Here’s a very clear set of animated examples to show each of the six degrees of freedom.
Reality Check: Computer Graphics in VR Today
The impact of computer graphics in VR today