The goal of Lytro is to bring light field and computational photography technologies to the world for everyone to enjoy. In realizing this goal, our engineers have encountered and resolved countless challenges that were once unimaginable in the lab environment.
In this blog post, we would like to share the results of one particular project conducted at Lytro: modeling the light field camera. In the process of building light field cameras and processing light field data, we found that our existing light field model could not accurately predict system performance; in practice, image resolution often exceeded the limits set by previous models.
Establishing an accurate model is crucial for Lytro, as we need it to design reliable light field cameras. Chia-Kai Liang, architect in the Lytro Computational Photography Group, teamed up with Prof. Ravi Ramamoorthi of UCSD, a distinguished researcher in computer graphics, to build a more accurate model. What we found is that the full spatial and angular sensitivity profile of the photosensor must be considered to model the light field camera accurately: a light ray’s contribution to the light field data depends both on where it hits the sensor surface and on the angle at which it arrives. All of these variations play important roles in determining the performance of the light field camera. While it is somewhat intuitive in hindsight that the performance of a conventional 2D camera depends on the spatial fill factor of the sensor, and that a 4D light field camera should therefore depend on both the spatial and angular sensor sensitivities, no previous model had taken this vital information into account.
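To make the idea concrete, here is a toy sketch of how a ray's contribution can be weighted by both a spatial and an angular sensitivity profile. This is purely illustrative and not Lytro's actual model: the boxcar spatial profile, the Gaussian angular falloff, and all parameter values are invented assumptions for the example.

```python
import math

def spatial_sensitivity(x, fill=0.8):
    """Boxcar profile over one pixel of unit width (x in [-0.5, 0.5]):
    full sensitivity inside the active area, zero in the inactive gap.
    The fill factor 0.8 is an assumed, illustrative value."""
    return 1.0 if abs(x) <= fill / 2 else 0.0

def angular_sensitivity(theta, sigma=0.3):
    """Assumed Gaussian falloff with the angle of incidence theta (radians):
    rays arriving head-on count fully, oblique rays count less."""
    return math.exp(-(theta ** 2) / (2 * sigma ** 2))

def ray_contribution(x, theta):
    """Weight of one ray: the product of spatial and angular sensitivities.
    A model that ignores the angular term would weight all angles equally."""
    return spatial_sensitivity(x) * angular_sensitivity(theta)

# A ray hitting the pixel center head-on contributes fully,
assert ray_contribution(0.0, 0.0) == 1.0
# an oblique ray at the same spot contributes only partially,
assert 0.0 < ray_contribution(0.0, 0.5) < 1.0
# and a ray landing in the inactive gap contributes nothing.
assert ray_contribution(0.45, 0.0) == 0.0
```

A purely spatial model corresponds to dropping the `angular_sensitivity` factor, which is exactly the information previous models left out.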
Based on this observation, we have since built a new simulation system that predicts the results of our physical camera much more accurately. The model also enables us to virtually test and evaluate different light field camera designs without building physical prototypes or collecting data.
We believe this model will be useful not only to us but to everybody exploring the world of computational photography, and we would like to share our findings. Our paper describing the details of this work has been accepted by ACM Transactions on Graphics, the highest-tier scientific journal in computer graphics. We hope it helps researchers rethink how light field cameras are modeled and inspires even more interesting work in computational photography.
The author’s version of the paper is available at
Supplemental Material: http://cseweb.ucsd.edu/~ravir/lfres-supp.pdf