Depth From Light Fields Analyzing 4D Local Structure

In this paper, we develop a local method to obtain depth from the 4D light field. In contrast to previous local depth-from-light-field methods based on epipolar plane images (EPIs), i.e., 2D slices of the light field, the proposed method takes the 4D nature of the light field into account and exploits all four of its dimensions. Furthermore, our technique maps well to parallel hardware. The performance of the method is evaluated on a publicly available benchmark dataset and compared with other algorithms that have previously been tested on the same benchmark. Results show that the proposed method achieves competitive accuracy in reasonable time.
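
As context for the EPI-based baselines the abstract contrasts with, the sketch below illustrates the classical 2D approach: extract an EPI, i.e. a 2D (view, pixel) slice of the 4D light field, and recover the local line slope, which is proportional to disparity, from a 2D structure tensor. This is a minimal illustration of that baseline technique, not the paper's 4D method; the light-field indexing L[s, t, v, u], the function names, and the sign/scale convention of the slope are assumptions made here for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def horizontal_epi(lightfield, t0, v0):
        """Extract a horizontal EPI: a 2D (s, u) slice of a light field
        assumed to be indexed as L[s, t, v, u] (angular s, t; spatial v, u)."""
        return lightfield[:, t0, v0, :]

    def epi_slope(epi, sigma=1.5):
        """Estimate the local EPI line slope du/ds (proportional to disparity)
        with a 2D structure tensor; returns the slope and a coherence measure."""
        epi = epi.astype(np.float64)
        gs = sobel(epi, axis=0)  # derivative along the view axis s
        gu = sobel(epi, axis=1)  # derivative along the spatial axis u

        # Locally averaged structure-tensor components.
        Jss = gaussian_filter(gs * gs, sigma)
        Juu = gaussian_filter(gu * gu, sigma)
        Jsu = gaussian_filter(gs * gu, sigma)

        # Orientation of the dominant gradient direction; the EPI line runs
        # perpendicular to it, so its slope is -tan(phi) in this convention
        # (the sign depends on how the views are ordered).
        phi = 0.5 * np.arctan2(2.0 * Jsu, Juu - Jss)
        slope = -np.tan(phi)

        # Coherence in [0, 1]: close to 1 where a single orientation dominates,
        # i.e., where the purely local slope (and hence depth) estimate is reliable.
        coherence = np.sqrt((Juu - Jss) ** 2 + 4.0 * Jsu ** 2) / (Juu + Jss + 1e-12)
        return slope, coherence

The coherence map indicates where such purely local per-slice estimates are trustworthy; the abstract's point is that analyzing the local structure of the full 4D light field, rather than one 2D slice at a time, uses all four dimensions when forming each estimate.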
