A Light Transport Framework for Lenslet Light Field Cameras

Light field cameras capture the full spatio-angular information of the light field and enable many novel photographic and scientific applications. It is often stated that there is a fundamental trade-off between spatial and angular resolution, but this trade-off has received limited theoretical or numerical analysis. Moreover, it is very difficult to evaluate the design of a light field camera, because a new design is usually reported together with its prototype and rendering algorithm, both of which affect resolution. In this article, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the prefiltering model of lenslet-based light field cameras. The main novelty of our model is that it considers the full space-angle sensitivity profile of the photosensor; in particular, real pixels have nonuniform angular sensitivity, responding more strongly to light arriving along the optical axis than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows them to be compared in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.
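The sketch below is a minimal flatland (one spatial plus one angular dimension) illustration of the ideas summarized above: a discretized light field is mapped to sensor pixels through a transport matrix that includes a nonuniform angular sensitivity, and the two rendering approaches are contrasted. Everything here is an assumption for illustration only: the grid sizes, the cos^4 angular falloff, the angular blur weights, the Tikhonov weight, and names such as `build_transport_matrix` are not taken from the paper's actual model or code.

```python
# Minimal flatland sketch (illustrative assumptions, not the paper's model).
import numpy as np

# Discretize the flatland light field at the lenslet plane:
# n_lenslets spatial samples, n_angles angular samples per lenslet.
n_lenslets, n_angles = 32, 8
n_rays = n_lenslets * n_angles          # unknowns in the light field vector
n_pixels = n_lenslets * n_angles        # one pixel per (lenslet, view) bin

# Assumed nonuniform angular sensitivity: pixels respond more strongly to
# light along the optical axis than at grazing angles (here a cos^4 falloff).
theta = np.linspace(-0.5, 0.5, n_angles)            # normalized ray angles
sensitivity = np.cos(theta * np.pi / 2) ** 4

def build_transport_matrix():
    """Build a toy prefiltering / light transport matrix T (pixels x rays).

    Each pixel integrates a small angular neighborhood under its lenslet,
    weighted by the angular sensitivity profile. This is only a stand-in
    for a full space-angle sensor profile."""
    T = np.zeros((n_pixels, n_rays))
    for j in range(n_lenslets):
        for k in range(n_angles):
            pixel = j * n_angles + k
            # Angular blur: each pixel also picks up its angular neighbors.
            for dk in (-1, 0, 1):
                kk = k + dk
                if 0 <= kk < n_angles:
                    weight = (0.5 if dk == 0 else 0.25) * sensitivity[kk]
                    T[pixel, j * n_angles + kk] += weight
    return T

T = build_transport_matrix()

# Simulate a measurement: sensor = T @ light_field (+ noise).
rng = np.random.default_rng(0)
light_field = rng.random(n_rays)
sensor = T @ light_field + 0.001 * rng.standard_normal(n_pixels)

# Rendering method 1: simple projection. Average the pixels under each
# lenslet to form one spatial sample; the sensor profile is ignored.
projected = sensor.reshape(n_lenslets, n_angles).mean(axis=1)

# Rendering method 2: inverse light transport. Solve the (regularized)
# linear system T x = sensor for the full discretized light field.
lam = 1e-3                               # Tikhonov regularization weight
A = T.T @ T + lam * np.eye(n_rays)
recovered = np.linalg.solve(A, T.T @ sensor)

print("projection output shape:", projected.shape)
print("inverse transport relative error:",
      np.linalg.norm(recovered - light_field) / np.linalg.norm(light_field))
```

The contrast the sketch is meant to show: the projection-based method simply pools pixel values and therefore inherits whatever blur the sensor profile introduces, while the inverse light transport method explicitly uses the transport matrix (and hence the sensitivity profile) when solving for the light field, at the cost of solving a regularized linear system.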
