Deep Learning-Driven Depth from Defocus via Active Multispectral Quasi-Random Projections with Complex Subpatterns

A promising approach to depth from defocus (DfD) involves actively projecting a quasi-random point pattern onto an object and assessing the blurriness of the projection, as captured by a camera, to recover the depth of the scene. Recently, it was found that depth inference can be made both faster and more accurate by leveraging deep learning approaches to computationally model and predict depth from the captured quasi-random point projections. Motivated by the fact that deep learning techniques can automatically learn useful features from the captured image of the projection, in this paper we present an extension of this quasi-random projection approach to DfD by introducing a new quasi-random projection pattern consisting of complex subpatterns instead of points. The design and choice of the subpattern used in the quasi-random projection are key factors in achieving improved depth recovery with high fidelity. Experimental results on complex surfaces, using quasi-random projection patterns composed of a variety of non-conventional subpattern designs, show that the use of complex subpatterns can significantly improve depth reconstruction quality compared to a point pattern.
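To make the projection-pattern construction concrete, the following is a minimal sketch (not the authors' implementation) of generating a quasi-random pattern in which a complex subpattern, rather than a single point, is stamped at low-discrepancy locations. It assumes a Halton sequence for the quasi-random positions and uses an illustrative plus-shaped stamp; the actual subpattern designs evaluated in the paper may differ.

```python
import numpy as np

def halton(index, base):
    """Halton low-discrepancy sequence value for a given index and base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def cross_subpattern(size=5):
    """Illustrative 'complex' subpattern: a plus-shaped stamp."""
    stamp = np.zeros((size, size))
    stamp[size // 2, :] = 1.0
    stamp[:, size // 2] = 1.0
    return stamp

def quasi_random_projection(height=240, width=320, n_stamps=200, stamp=None):
    """Place copies of a subpattern at quasi-random (Halton) positions.

    Returns a float image in [0, 1] that could be fed to a projector.
    """
    if stamp is None:
        stamp = cross_subpattern()
    sh, sw = stamp.shape
    canvas = np.zeros((height, width))
    for i in range(1, n_stamps + 1):
        # Bases 2 and 3 give a 2-D low-discrepancy point set.
        y = int(halton(i, 2) * (height - sh))
        x = int(halton(i, 3) * (width - sw))
        canvas[y:y + sh, x:x + sw] = np.maximum(canvas[y:y + sh, x:x + sw], stamp)
    return canvas

pattern = quasi_random_projection()
```

Replacing `cross_subpattern` with other stamps (rings, grids, textured patches) is how one would explore the subpattern design space the abstract refers to.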
