Toward image-based scene representation using view morphing

The question of which views may be inferred from a set of basis images is addressed. Under certain conditions, a discrete set of images implicitly describes scene appearance for a continuous range of viewpoints. In particular, it is demonstrated that two basis views of a static scene determine the set of all views on the line between their optical centers. Additional basis views further extend the range of predictable views to a two- or three-dimensional region of viewspace. These results are shown to apply under perspective projection subject to a generic visibility constraint called monotonicity. In addition, a simple scanline algorithm is presented for actually generating these views from a set of basis images. The technique, called view morphing, may be applied to both calibrated and uncalibrated images. At a minimum, two basis views and their fundamental matrix are needed. Experimental results are presented on real images. This work provides a theoretical foundation for image-based representations of 3D scenes by demonstrating that perspective view synthesis is a theoretically well-posed problem.
