In previous blogs we’ve described what a Light Field is, how it is captured, what a Light Field volume is, and we’ve highlighted the benefits. To review, a Light Field is a collection of light rays, with their color, brightness and, critically, the angular path of each ray. In this article we’ll continue digging in and describe how those benefits are produced from a captured Light Field.
To get started, recall that after a Lytro ILLUM (a plenoptic capture device described in this article) captures a Light Field, we process the light ray data, with attributes such as focus and aperture that can be changed after capture. We can recalculate these because the Light Field we recorded contains the angular path of the rays.
Leveraging the light ray path information, we know where those rays would have gone even if the lens settings had been different: focus and aperture, for example, or artistic lens distortions, or even the angle of the sensor for a virtual tilt/shift mechanism. By recomputing these light ray paths we can simulate how the rays flow through the lens and converge onto a virtual imaging plane, producing a variety of images and effects. It’s essentially a virtual camera: one that can change viewpoints within an image, with a refocusable virtual lens and an adjustable aperture, all controllable even after the shutter has snapped and the image has been captured.
To focus on a subject in any camera, the optical elements in the lens move closer to or farther from the imaging plane to bend and converge light rays until they form a sharp image. In a traditional camera, when the shutter is opened and the image is captured, those focused light rays are recorded exactly as the lens projected them onto the imaging plane, as bits of color and brightness. To produce a picture, these values are then converted into a static image format that represents that exact ray composition.
When using a Lytro plenoptic camera to capture a Light Field, the angular path (direction) and color information of bundles of light rays are recorded from multiple points of view. This is achieved by placing a high-density imaging sensor under a very fine microlens array composed of many thousands of microscopic lenses, all precisely positioned on top of even finer sensor pixels. Using the pixel information under each microlens, the plenoptic camera is able to determine the direction of each bundle of light, with each single pixel on the sensor capturing a single bundle or ray of light. With that directional information, we can mathematically determine where those light ray bundles originated and where they are converging. By calculating the position of the virtual imaging plane to match the desired plane of convergence and focus, the plenoptic camera creates a range of refocusability that can be applied after the image has already been captured. With Light Field data and the right computation, we can produce sharpness where there was blur, which lets us adjust both the focal point and the depth of field.
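The refocusing idea above can be sketched with a classic shift-and-add computation. This is a minimal illustration, not Lytro’s actual pipeline: it assumes the Light Field has already been decoded into a 4D array of sub-aperture views indexed by angular position `(u, v)`, and the `alpha` focus parameter is a hypothetical name. Each view is shifted in proportion to its angular offset from the aperture center, then all views are averaged, which moves the plane of sharp focus.

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocus over a decoded Light Field.

    lightfield: array of shape (U, V, H, W) -- one grayscale
    sub-aperture image per angular sample (u, v).
    alpha: focus parameter; 0 keeps the captured focal plane,
    larger magnitudes move the virtual imaging plane.
    """
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its angular offset
            # from the center of the aperture, then accumulate.
            dy, dx = alpha * (u - cu), alpha * (v - cv)
            out += np.roll(lightfield[u, v],
                           (round(dy), round(dx)), axis=(0, 1))
    return out / (U * V)
```

Real implementations use sub-pixel interpolation rather than integer `np.roll` shifts, but the principle is the same: a different `alpha` re-converges the recorded rays on a different virtual plane.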
From Light Field data we can also virtually recreate various aperture settings. The aperture is an adjustable circular opening formed by a mechanical ring of multiple blades, which allows more light to pass through the lens when open wide, and less light as it closes. Changing the aperture affects exposure as well as depth of field, one of the fundamental relationships in photography since the invention of the camera.
Aperture values (known as an f-stop, or f) on a lens refer to the ratio of that lens’ focal length divided by the diameter of the opening. For example, a 100mm lens with a 50mm wide aperture opening would be referred to as having an f-stop value of f/2.0 (100 / 50 = 2); an f/16 value on the same lens means that the aperture is 6.25mm in diameter (100 / 16 = 6.25). Wider apertures let in more light and create a shallower depth of field (a small area in focus); smaller apertures let in less light and create a deeper depth of field (a larger area in focus).
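The f-stop ratio is simple enough to express directly. A small sketch of the two directions of the arithmetic, using the 100mm example above:

```python
def f_stop(focal_length_mm, aperture_diameter_mm):
    """f-number = focal length / aperture diameter."""
    return focal_length_mm / aperture_diameter_mm

def aperture_diameter(focal_length_mm, f_number):
    """Invert the ratio to recover the physical opening's diameter."""
    return focal_length_mm / f_number

# A 100mm lens with a 50mm opening is f/2.0; stopped down to f/16
# the same lens has a 6.25mm opening.
print(f_stop(100, 50))            # -> 2.0
print(aperture_diameter(100, 16)) # -> 6.25
```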
As shown in the illustration above, aperture diameter has an impact on the range of focus (depth of field: DOF). Experienced photographers know that a wide-open aperture like f/2.0 will produce images or video with a very shallow DOF, meaning that the focus range is very narrow or tight; areas in front of or behind that focus point transition to blur very quickly. This effect is often used creatively to isolate the subject from its foreground and background.
Conversely, a small or tight aperture such as f/16 will capture images with a very deep range of focus, extending well in front of and behind the main subject. In images that need preservation of detail from near to far, a tight aperture is very useful to ensure focus, but small apertures require lots of light and provide limited creative freedom.
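The aperture/DOF relationship described above can be quantified with the standard thin-lens approximation via the hyperfocal distance. This is a generic photographic formula, not anything specific to Lytro’s processing, and the 0.03mm circle of confusion is an assumed full-frame default:

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus (thin-lens approximation).

    coc_mm: circle of confusion; 0.03 mm is a common full-frame value.
    Returns (near_mm, far_mm); far is infinite past the hyperfocal
    distance.
    """
    # Hyperfocal distance: focusing here makes everything from
    # roughly H/2 to infinity acceptably sharp.
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (H - focal_mm) / (H + subject_mm - 2 * focal_mm)
    far = (subject_mm * (H - focal_mm) / (H - subject_mm)
           if subject_mm < H else float("inf"))
    return near, far
```

For a 100mm lens focused at 3m, stopping down from f/2.0 to f/16 widens the in-focus band considerably, matching the shallow-versus-deep DOF behavior described above.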
Shallow Depth of Field (DOF) is very good for helping to isolate the subject from its foreground objects and background. Full DOF produces a wider range of focus including the subject, extending to foreground and background elements in the scene.
By design, Lytro plenoptic cameras capture the Light Field at a fully open aperture. Based on what we described above, you’d assume that every image out of our cameras would have shallow DOF, but because we capture and recalculate the light ray path data, we can virtually recreate a full range of aperture diameters from very large to very small. This provides the ability to creatively control the aperture and DOF in ways traditional cameras cannot. The most obvious benefits are the ability to adjust DOF after the image has been taken, and the ability to simulate very small f-numbers (below f/1.0) via estimated depth, producing extraordinarily shallow DOF beyond the mechanical limits of the physical camera; this is an extremely powerful creative tool enabled by Light Field capture.
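One simple way to picture virtual stopping-down is that a smaller aperture corresponds to averaging a tighter bundle of ray directions. In the same hypothetical sub-aperture-array model used earlier (not Lytro’s actual implementation), that means keeping only the angular views within some radius of the aperture center:

```python
import numpy as np

def virtual_aperture(lightfield, radius):
    """Average only the sub-aperture views within `radius` of the
    angular center -- a smaller radius emulates a smaller aperture
    (deeper DOF), a larger one emulates shooting wide open."""
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    n = 0
    for u in range(U):
        for v in range(V):
            if (u - cu) ** 2 + (v - cv) ** 2 <= radius ** 2:
                out += lightfield[u, v]
                n += 1
    return out / max(n, 1)
```

At `radius = 0` only the central view survives, giving the deepest (pinhole-like) DOF; including every view reproduces the fully open capture.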
For static imagery, Light Field data creates dramatic Living Pictures. But when used for cinematic video, Light Field data provides unprecedented creative control over the focus plane, focus ranges and DOF, allowing all of these to be virtually modified and animated throughout a take even after it has been recorded. Combined with the varied points of view across the plenoptic array that a Light Field capture provides, even the framing of a scene can be shifted along the imaging plane’s horizontal and vertical axes after capture. With video, these effects can be carefully controlled and combined over time, providing whole new creative dimensions in storytelling artistry.
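In the simplified sub-aperture model used throughout these sketches, shifting the framing after capture amounts to selecting a different angular view `(u, v)` from the decoded array; the function name and offsets here are illustrative, not a real Lytro API:

```python
import numpy as np

def shift_viewpoint(lightfield, du, dv):
    """Return the sub-aperture image offset (du, dv) from the
    central view -- a small horizontal/vertical viewpoint shift."""
    U, V = lightfield.shape[:2]
    u = (U - 1) // 2 + du
    v = (V - 1) // 2 + dv
    if not (0 <= u < U and 0 <= v < V):
        raise ValueError("offset outside the captured angular range")
    return lightfield[u, v]
```

The available shift is bounded by the physical extent of the lens aperture, which is why the effect is a subtle reframing rather than a free-roaming camera.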