On the Calibration of Focused Plenoptic Cameras

Plenoptic cameras provide a robust way to capture 3D information in a single shot. This is accomplished by encoding the direction of the incoming rays with a microlens array (MLA) in front of the camera sensor. In the focused plenoptic camera, the MLA acts like multiple small cameras that capture the virtual scene in the focal plane of the main lens from slightly different angles, which enables algorithmic depth reconstruction. This virtual depth is measured on the camera side and is independent of the main lens used. The connection between actual lateral distances and virtual depth, however, does depend on the main lens parameters and needs to be carefully calibrated. In this paper, we propose an approach to calibrate focused plenoptic cameras that allows a metric analysis of a given scene. To achieve this, we minimize an energy model based on the thin lens equation. The model allows us to estimate intrinsic and extrinsic parameters and corrects for radial lateral as well as radial depth distortion.
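
As a brief illustration of the kind of relation such a calibration rests on (a minimal sketch in illustrative notation, not the paper's actual model), the thin lens equation ties the metric object distance to the camera-side image distance of the main lens:

\[
  \frac{1}{f_L} = \frac{1}{a_L} + \frac{1}{b_L}
  \qquad\Longrightarrow\qquad
  a_L = \frac{b_L\, f_L}{b_L - f_L},
\]

where $f_L$ denotes the focal length of the main lens, $a_L$ the object distance, and $b_L$ the image distance. Once $f_L$ and the image-side geometry are calibrated, a depth measured on the camera side can be converted into a metric distance in the scene; the radial lateral and radial depth distortions mentioned above are what the proposed model additionally corrects for.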
