A geometric approach to shape from defocus

We introduce a novel approach to shape from defocus, i.e., the problem of inferring the three-dimensional (3D) geometry of a scene from a collection of defocused images. Typically, in shape from defocus, the task of extracting geometry also requires deblurring the given images. A common approach to bypass this task relies on approximating the scene locally by a plane parallel to the image (the so-called equifocal assumption). We show that this approximation is not necessary: one can estimate 3D geometry without deblurring and without strong assumptions on the scene. Solving the problem of shape from defocus requires modeling how light interacts with the optics before reaching the imaging surface. This interaction is described by the so-called point spread function (PSF). When the form of the PSF is known, we propose an optimal method to infer 3D geometry from defocused images that involves computing orthogonal operators regularized via the functional singular value decomposition. When the form of the PSF is unknown, we propose a simple and efficient method that first learns a set of projection operators from blurred images and then uses these operators to estimate the 3D geometry of the scene from novel blurred images. Our general approach is to minimize the Euclidean norm of the difference between the estimated images and the observed images. The method is geometric in that we reduce the minimization to projections onto linear subspaces, exploiting inner product structures on both infinite- and finite-dimensional Hilbert spaces. Both proposed algorithms involve only simple matrix-vector multiplications, which can be implemented in real time. Our experiments on both real and synthetic images show that the performance of the algorithm is relatively insensitive to the form of the PSF.
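
The geometric recipe above can be illustrated with a compact numerical sketch. The snippet below is a minimal illustration rather than the paper's implementation: it assumes a synthetic 1D Gaussian PSF, two focus settings, and patch-wise processing, and every function name, blur model, and parameter value is our own assumption. For each candidate depth it builds a blurring operator, forms the orthogonal projector onto the complement of its range via a truncated (discrete) SVD, and selects the depth that minimizes the Euclidean norm of the projected observations, using only matrix-vector products.

# Minimal sketch of the known-PSF variant: project the stacked defocused
# observations onto the orthogonal complement of the range of each candidate
# depth's blurring operator, and pick the depth with the smallest residual norm.
# The Gaussian PSF, patch size, focus settings, and parameter values below are
# illustrative assumptions, not taken from the paper.

import numpy as np

def gaussian_psf_matrix(n, sigma):
    """Dense n x n matrix applying a 1D Gaussian blur of width sigma (assumed PSF)."""
    x = np.arange(n)
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / max(sigma, 1e-6)) ** 2)
    return K / K.sum(axis=1, keepdims=True)

def orthogonal_residual_operator(H, rank_tol=1e-3):
    """Projector onto the orthogonal complement of range(H), regularized by
    truncating small singular values (a discrete analogue of the functional SVD)."""
    U, s, _ = np.linalg.svd(H, full_matrices=True)
    r = int(np.sum(s > rank_tol * s[0]))        # keep only well-conditioned directions
    Ur = U[:, :r]
    return np.eye(H.shape[0]) - Ur @ Ur.T       # I - U_r U_r^T

def estimate_depth(patches, depths, blur_for_depth):
    """For each observation vector y (stacked defocused patches), return the
    candidate depth minimizing ||P_perp(d) y||; only matrix-vector products are used."""
    projectors = [orthogonal_residual_operator(blur_for_depth(d)) for d in depths]
    residuals = np.stack([np.linalg.norm(patches @ P.T, axis=1) for P in projectors])
    return depths[np.argmin(residuals, axis=0)]

if __name__ == "__main__":
    n, depths = 16, np.linspace(0.5, 2.0, 20)
    # Two focus settings: blur width grows (resp. shrinks) with depth (assumed model).
    blur_for_depth = lambda d: np.vstack([gaussian_psf_matrix(n, 0.5 * d),
                                          gaussian_psf_matrix(n, 1.0 / d)])
    true_d = 1.3
    radiance = np.random.rand(100, n)                        # random scene patches
    y = radiance @ blur_for_depth(true_d).T                  # stacked defocused observations
    print(np.median(estimate_depth(y, depths, blur_for_depth)))   # should be close to 1.3

In the unknown-PSF setting described in the abstract, the analytically built blur matrices would be replaced by projection operators learned from training blurred images, but the depth search itself, an argmin over residual norms computed by matrix-vector products, stays the same.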
