(De)focusing on global light transport for active scene recovery

Most active scene recovery techniques assume that a scene point is illuminated only directly by the illumination source. Consequently, global illumination effects due to inter-reflections, sub-surface scattering, and volumetric scattering introduce strong biases in the recovered scene shape. Our goal is to recover scene properties in the presence of global illumination. To this end, we study the interplay between global illumination and the depth cue of illumination defocus. By expressing both effects as low-pass filters, we derive an approximate invariant that can be used to separate them without explicitly modeling the light transport. This is directly useful in any scenario where limited depth-of-field devices (such as projectors) illuminate scenes with global light transport and significant depth variations. We show two applications: (a) accurate depth recovery in the presence of global illumination, and (b) factoring out the effects of defocus for correct direct-global separation in large-depth scenes. We demonstrate our approach on scenes with complex shapes, reflectances, textures, and translucencies.
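The key observation above, that projector defocus varies with the focus setting while global-transport blur does not, can be illustrated with a minimal 1D simulation. This sketch is not the paper's algorithm: it simply models both effects as Gaussian low-pass filters (an assumption) and shows that a contrast-based focus measure still peaks at the correct focus setting even when a fixed global-illumination blur is added. All names (`observe`, `best_focus_setting`, the kernel widths) are illustrative.

```python
import numpy as np

def gaussian_kernel(sigma, radius=25):
    """Normalized 1D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def observe(pattern, defocus_sigma, global_sigma):
    """Projected pattern blurred by defocus, then by a fixed
    global-transport kernel (both modeled as Gaussians here)."""
    img = np.convolve(pattern, gaussian_kernel(defocus_sigma), mode="same")
    if global_sigma > 0:
        img = np.convolve(img, gaussian_kernel(global_sigma), mode="same")
    return img

# High-frequency projected pattern (1D square wave).
x = np.arange(512)
pattern = (np.sin(2 * np.pi * x / 8) > 0).astype(float)

# Sweep of projector focus settings; the scene point is assumed to be
# in best focus at index 3 of this sweep.
focus_settings = np.linspace(0.5, 8.0, 16)
TRUE_IDX = 3

def best_focus_setting(global_sigma):
    """Return the focus-setting index with maximum image contrast."""
    contrasts = []
    for s in focus_settings:
        # Defocus blur grows with distance from the in-focus setting.
        defocus_sigma = 0.5 + abs(s - focus_settings[TRUE_IDX])
        contrasts.append(observe(pattern, defocus_sigma, global_sigma).std())
    return int(np.argmax(contrasts))

# The focus-independent global blur lowers contrast everywhere but does
# not move the peak, so depth from defocus survives global transport.
print(best_focus_setting(0.0), best_focus_setting(3.0))  # → 3 3
```

Because the global kernel is the same at every focus setting, it only scales the contrast curve rather than shifting its peak, which is the intuition behind separating the two effects without modeling the transport explicitly.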
