The generation of depth maps via depth-from-defocus

The principal aim of this study was to exploit the relationship between image defocus and feature depth in order to develop a system capable of converting a 2-dimensional greyscale image into a 3-dimensional depth map. An advantage of this approach, known as depth-from-defocus (DfD), over techniques such as stereo imaging is that it avoids the so-called 'correspondence problem', in which the location of each feature or landmark point must be matched across the stereo images. Most previous DfD research, including the most successful, has used some variation of a 'two-image' technique to separate the contribution of the original scene features from the defocus effect; the best of these methods have achieved depth-estimation errors typically in the range of 1% to 2%. This thesis presents a single-image method of generating a high-density, high-accuracy depth map by evaluating the edge profiles of a projected structured light pattern. A novel technique of moving the projected pattern during the image capture stage allows a 4-dimensional look-up table to be constructed. This technique addresses one of the last remaining problems in DfD, that of spatial variance, and also removes the dependence on original scene reflectance. The final system generates a depth map of up to 240,000 spatially invariant depth estimates per scene image, with an accuracy of within ±0.5% over a depth range of 10 cm. The depth map is generated in approximately 14 seconds of processing time once the images are loaded.
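The core idea behind DfD can be illustrated with the thin-lens model: the diameter of the blur circle cast by a defocused point is a function of its depth, so measured blur can be inverted to recover depth. The sketch below is a minimal illustration of that relationship only; the lens parameters are hypothetical, and the thesis itself replaces this analytic inversion with a calibrated 4-dimensional look-up table built from a moving structured light pattern, which avoids spatial variance and reflectance dependence.

```python
def blur_diameter(u, f=0.05, D=0.02, s=0.0525):
    """Thin-lens blur-circle diameter (m) for a point at depth u (m).

    f: focal length, D: aperture diameter, s: lens-to-sensor distance.
    All parameter values are illustrative, not taken from the thesis.
    """
    return D * s * abs(1.0 / f - 1.0 / u - 1.0 / s)

def depth_from_blur(c, f=0.05, D=0.02, s=0.0525, far_side=True):
    """Invert the thin-lens relation to estimate depth from blur.

    A single blur measurement is ambiguous (the point may lie nearer or
    farther than the in-focus plane); here we assume the far side, which
    is one reason most DfD methods use two images or structured light.
    """
    k = 1.0 / f - 1.0 / s  # reciprocal of the in-focus depth
    if far_side:
        return 1.0 / (k - c / (D * s))
    return 1.0 / (k + c / (D * s))

# Round trip: a point at 2.0 m produces a blur circle whose measured
# diameter recovers the original depth.
u_true = 2.0
c = blur_diameter(u_true)
u_est = depth_from_blur(c)
```

The near/far ambiguity noted in the comments is inherent to single-measurement DfD; the single-image method described above sidesteps it by projecting a known pattern and characterising its edge profiles against calibration data rather than relying on lens equations.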
