View Generation for 3-D Scenes from Video Sequences

This paper presents a representation for 3-D scenes consisting of dense depth maps at preselected viewpoints, computed from video sequences captured under unknown but approximately horizontal camera motion. In contrast to existing methods that construct a full 3-D model or that exploit geometric invariants, an intensity-depth representation is used to generate arbitrary views of the 3-D scene. Specifically, the depth maps are regarded as vertices of a deformable 2-D mesh, which are transformed in 3-D, projected to 2-D, and rendered to generate the desired view. Experimental results are presented to verify the approach.
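To make the pipeline concrete, the sketch below illustrates the general intensity-depth warping idea: each pixel of a reference depth map is back-projected to 3-D, rigidly transformed to a desired viewpoint, reprojected, and its intensity splatted into the new view. This is only a minimal illustration, not the paper's implementation; the paper rasterizes a deformable 2-D mesh rather than splatting individual pixels, and the pinhole intrinsics K, rotation R, and translation t here are assumed placeholders.

```python
import numpy as np

def render_new_view(intensity, depth, K, R, t):
    """Warp an intensity image to a new viewpoint using its dense depth map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(), np.ones(h * w)])  # homogeneous pixel coords, 3 x N

    # Back-project pixels to 3-D points in the reference camera frame.
    rays = np.linalg.inv(K) @ pix
    points = rays * depth.ravel()

    # Rigid transform into the desired (virtual) camera frame, then project.
    cam = R @ points + t[:, None]
    proj = K @ cam
    x = np.round(proj[0] / proj[2]).astype(int)
    y = np.round(proj[1] / proj[2]).astype(int)

    # Forward-splat intensities, keeping the nearest surface at each target pixel
    # (a crude z-buffer stand-in for mesh rasterization).
    out = np.zeros_like(intensity)
    zbuf = np.full((h, w), np.inf)
    valid = (x >= 0) & (x < w) & (y >= 0) & (y < h) & (cam[2] > 0)
    for xi, yi, zi, ci in zip(x[valid], y[valid], cam[2][valid],
                              intensity.ravel()[valid]):
        if zi < zbuf[yi, xi]:
            zbuf[yi, xi] = zi
            out[yi, xi] = ci
    return out
```

A mesh-based renderer would instead connect neighboring depth samples into triangles and interpolate intensities across them, which fills the holes that per-pixel splatting leaves at disocclusions.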
