Geotensity: combining motion and lighting for 3D surface reconstruction

This paper addresses the automatic reconstruction of the full 3D surface of an object observed in motion by a single static camera. We introduce the geotensity constraint, which governs the relationship between four images of a moving object under fairly general lighting conditions. We show that it is possible in theory to solve for 3D surface structure in both the case of a single point light source and that of a pair of point light sources, and we propose that a solution exists for an arbitrary number of point light sources. The surface may or may not be textured. We then give an example of automatic surface reconstruction of a face under a single point light source. The geotensity constraint provides the theoretical foundation for the fully automatic 3D reconstruction of Lambertian objects using a single fixed camera, under arbitrary unknown object motion and arbitrary lighting conditions.
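The photometric side of this argument can be illustrated numerically. The sketch below is our own construction (all names and data are illustrative, not code from the paper): ignoring shadows, a Lambertian intensity is albedo times the dot product of surface normal and light direction, so the matrix of intensities over P points and K frames factors through a rank-3 surface matrix. A fourth frame's intensities are then a fixed linear combination of the first three, which is the photometric half of a geotensity-style constraint.

```python
import numpy as np

rng = np.random.default_rng(0)

P, K = 50, 4                       # P surface points observed in K frames
normals = rng.normal(size=(P, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = rng.uniform(0.2, 1.0, size=P)
lights = rng.normal(size=(3, K))   # light direction in the object frame, per frame

# Without shadows, I[p, k] = albedo[p] * (n[p] . s[k]),
# so I = B @ S with B of shape (P, 3): rank at most 3.
B = albedo[:, None] * normals
I = B @ lights

# Coefficients expressing frame 4's lighting in terms of frames 1-3 ...
c = np.linalg.solve(lights[:, :3], lights[:, 3])
# ... predict every point's intensity in frame 4 from the first three frames.
I4_pred = I[:, :3] @ c
print(np.allclose(I4_pred, I[:, 3]))   # True: intensities lie in a 3D subspace
```

The same linear-dependence structure is what lets the paper couple intensity measurements with epipolar geometry: once corresponding points are hypothesized across four frames, their intensities must satisfy this relation, and deviations from it penalize wrong correspondences.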
