High-resolution modeling of moving and deforming objects using sparse geometric and dense photometric measurements

Modeling moving and deforming objects requires capturing as much information as possible during a very short time. With off-the-shelf hardware, this often limits the resolution and accuracy of the acquired model. Our key observation is that in as few as four frames, both sparse surface-position measurements and dense surface-orientation measurements can be acquired using a combination of structured light and photometric stereo, resulting in high-resolution models of moving and deforming objects. Our system projects alternating geometric and photometric patterns onto the object using a set of three projectors and captures the object with a synchronized camera. Small motions between temporally close frames are compensated for by estimating the optical flow of images captured under the uniform illumination of the photometric light. A spatiotemporal photogeometric reconstruction is then performed to obtain dense and accurate point samples at a sampling resolution equal to that of the camera. Temporal coherence is also enforced. We demonstrate our system by successfully modeling several moving and deforming real-world objects.
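The photometric half of the pipeline estimates a dense normal field from images taken under the three projector illuminations. As a rough illustration only (not the paper's implementation), a minimal classical three-light Lambertian photometric-stereo solver is sketched below; the function name `photometric_stereo`, the calibrated light directions, and the Lambertian reflectance assumption are all illustrative.

```python
# Minimal sketch, assuming three grayscale frames captured under three known,
# calibrated directional lights and a Lambertian surface. Not the authors' code.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: list of 3 float grayscale images, each (H, W).
    light_dirs: (3, 3) array whose rows are unit light directions.
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W)."""
    I = np.stack([im.reshape(-1) for im in images], axis=0)   # (3, H*W) intensities
    L = np.asarray(light_dirs, dtype=np.float64)              # (3, 3) light matrix
    # Lambertian model: I = L @ (albedo * n); solve per pixel for g = albedo * n.
    g = np.linalg.solve(L, I)                                  # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                         # (H*W,)
    n = g / np.maximum(albedo, 1e-8)                           # normalize to unit normals
    h, w = images[0].shape
    return n.T.reshape(h, w, 3), albedo.reshape(h, w)
```

In the described system, these dense normals would then be fused with the sparse structured-light depth samples, after optical-flow motion compensation between the temporally close frames; neither the fusion nor the flow estimation is shown in this sketch.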
