Exploiting Loops in the Camera Array for Automatic Focusing Depth Estimation

Abstract Autofocus is a fundamental problem in modern imaging sensor design. Although it has been well studied for single cameras, little research has addressed large-scale camera arrays. Most existing synthetic aperture imaging systems still require the optimal focal plane to be selected manually when an object moves. Unlike conventional autofocus methods, which sweep the focal plane in search of maximal contrast, we present a novel optimization framework to address these challenges. Specifically, we formulate camera array autofocus as a constrained optimization problem that minimizes temporal and spatial correspondence errors subject to a global loop constraint. The problem is then relaxed to a quadratic program and solved with sequential quadratic programming. Experimental results show that the proposed method outperforms traditional methods. To the best of our knowledge, this is the first optimization framework for the camera array autofocus problem, and it is of great value for improving the performance of existing synthetic aperture imaging systems.
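The abstract's core idea, minimizing correspondence error subject to a global loop constraint via a relaxed quadratic program, can be illustrated on a toy instance. The sketch below is an assumption-laden simplification, not the authors' implementation: it treats the unknowns as relative focus-depth offsets along the edges of a single camera loop, takes hypothetical noisy pairwise measurements as input, and solves the equality-constrained least-squares problem min ||e - m||^2 s.t. Ce = 0 in closed form through its KKT system.

```python
import numpy as np

def loop_consistent_offsets(m, C):
    """Solve min ||e - m||^2 subject to C e = 0 via the KKT system.

    m : noisy pairwise focus-depth offsets along loop edges (hypothetical
        output of correspondence matching; not from the paper's data).
    C : loop-incidence matrix; each row sums the offsets around one loop,
        encoding the global loop-closure constraint from the abstract.
    """
    m = np.asarray(m, dtype=float)
    C = np.atleast_2d(np.asarray(C, dtype=float))
    # KKT solution: e = m - C^T (C C^T)^{-1} C m, i.e. the orthogonal
    # projection of the measurements onto the loop-consistent subspace.
    lam = np.linalg.solve(C @ C.T, C @ m)
    return m - C.T @ lam

# Three cameras forming one loop; the raw offsets fail to close the loop
# (their sum is 0.03 rather than 0), mimicking measurement noise.
m = [0.52, 0.31, -0.80]   # offsets e01, e12, e20 (toy values)
C = [[1.0, 1.0, 1.0]]     # single loop constraint: e01 + e12 + e20 = 0
e = loop_consistent_offsets(m, C)
print(e, e.sum())          # corrected offsets now sum to zero
```

The correction distributes the loop-closure error evenly across the edges, which is exactly what the least-squares projection does for a single uniform loop; with several overlapping loops, the same KKT system couples the corrections, which is the regime where an iterative solver such as SQP (as in the paper) becomes the natural choice.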
