A Combined Theory of Defocused Illumination and Global Light Transport

Projectors are increasingly being used as light sources in computer vision applications. In several applications, they are modeled as point light sources, thus ignoring the effects of illumination defocus. In addition, most active vision techniques assume that a scene point is illuminated only directly by the light source, thus ignoring global light transport effects. Since defocus and global illumination co-occur in virtually all scenes illuminated by projectors, ignoring them can result in strong, systematic biases in the recovered scene properties. To make computer vision techniques work for general real-world scenes, it is thus important to account for both effects. In this paper, we study the interplay between defocused illumination and global light transport. We show that both of these seemingly disparate effects can be expressed as low-pass filters on the incident illumination. Using this observation, we derive an invariant between the two effects, which can be used to separate them. This is directly useful in scenarios where limited depth-of-field devices (such as projectors) illuminate scenes with global light transport and significant depth variations. We show applications in two scenarios: (a) accurate depth recovery in the presence of global light transport, and (b) factoring out the effects of illumination defocus for correct direct-global component separation. We demonstrate our approach on scenes with complex shapes, reflectance properties, textures, and translucency.
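The key observation, that illumination defocus and global light transport both act as low-pass filters on the incident illumination, can be illustrated with a toy 1-D sketch. The Gaussian "defocus" kernel and exponential "scattering" kernel below are illustrative stand-ins, not the paper's calibrated point-spread functions. Because both effects are modeled as linear shift-invariant low-pass filters, their composition commutes and each attenuates the contrast of a high-frequency projected pattern, which is what makes deriving an invariant between the two possible:

```python
import numpy as np

def lowpass(signal, kernel):
    # Circular convolution via FFT: models a shift-invariant low-pass filter.
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, len(signal))))

n = 256
x = np.arange(n)
# High-frequency binary stripe pattern, as projected by an active vision system.
pattern = (np.sin(2 * np.pi * 8 * x / n) > 0).astype(float)

# Hypothetical defocus blur: Gaussian kernel, centered at index 0 for circular conv.
defocus = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
defocus = np.roll(defocus / defocus.sum(), -n // 2)
# Hypothetical global-transport blur: symmetric exponential falloff (e.g. scattering).
scatter = np.exp(-np.minimum(x, n - x) / 5.0)
scatter /= scatter.sum()

a = lowpass(lowpass(pattern, defocus), scatter)
b = lowpass(lowpass(pattern, scatter), defocus)
assert np.allclose(a, b)                 # the two low-pass effects commute
assert np.ptp(a) < np.ptp(pattern)       # both attenuate the stripe contrast
print("contrast: input =", np.ptp(pattern), " observed =", round(np.ptp(a), 3))
```

Since both filters only attenuate spatial frequencies (never amplify them), the observed image is a doubly smoothed version of the projected pattern; recovering depth or direct-global separation then amounts to disentangling which attenuation came from which filter.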
