VideoPlus: A Method for Capturing the Structure and Appearance of Immersive Environments

This paper describes an approach to capturing the appearance and structure of immersive environments from the video imagery obtained with an omnidirectional camera system. The scheme proceeds by recovering the 3D positions of a set of point and line features in the world from image correspondences in a small set of key frames of the image sequence. Once the locations of these features have been recovered, the position of the camera during every frame in the sequence can be determined by using the recovered features as fiducials and estimating camera pose from the locations of the corresponding image features in each frame. The end result of the procedure is an omnidirectional video sequence in which every frame is augmented with its pose with respect to an absolute reference frame, together with a 3D model of the environment composed of the point and line features in the scene. By augmenting the video clip with pose information, we give the viewer the ability to navigate the image sequence in new and interesting ways. More specifically, the user can exploit the pose information to travel through the video sequence along a trajectory different from the one taken by the original camera operator. This freedom gives the end user an opportunity to immerse themselves in a remote environment.
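The abstract's per-frame step, estimating camera pose from recovered fiducials, can be illustrated with a minimal sketch. The paper itself works from omnidirectional image correspondences (rays to point and line features); the code below instead shows the simpler rigid-alignment core of pose recovery, assuming point-only fiducials whose coordinates are known in both the world frame and the camera frame (the Kabsch/Procrustes solution via SVD). The function name `estimate_pose` and this simplification are illustrative, not the paper's actual estimator.

```python
import numpy as np

def estimate_pose(world_pts, camera_pts):
    """Recover R, t such that camera_pts ~ R @ world_pts + t.

    world_pts, camera_pts: (N, 3) arrays of corresponding 3D points.
    Uses the Kabsch/Procrustes solution: SVD of the centered
    cross-covariance gives the least-squares rotation.
    """
    wc = world_pts.mean(axis=0)          # centroid of world points
    cc = camera_pts.mean(axis=0)         # centroid of camera-frame points
    H = (world_pts - wc).T @ (camera_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ wc
    return R, t
```

With noiseless correspondences the true pose is recovered exactly; with noisy feature locations the same formula returns the least-squares rotation, which is why pose estimation against a fixed set of fiducials remains stable across every frame of the sequence.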

[1] Camillo J. Taylor. VideoPlus: A Method for Capturing the Structure and Appearance of Immersive Environments, 2002, IEEE Trans. Vis. Comput. Graph.

[2] Long Quan et al. Image interpolation by joint view triangulation, 1999, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

[3] Tom et al. Epipolar Geometry for Panoramic Cameras, 1998.

[4] Hiroshi Ishiguro et al. A strategy for acquiring an environmental model with panoramic sensing by a mobile robot, 1994, Proceedings of the IEEE International Conference on Robotics and Automation.

[5] Marc Levoy et al. Light field rendering, 1996, SIGGRAPH.

[6] Kostas Daniilidis et al. A Unifying Theory for Central Panoramic Systems and Practical Applications, 2000, ECCV.

[7] Shenchang Eric Chen et al. QuickTime VR: an image-based approach to virtual environment navigation, 1995, SIGGRAPH.

[8] Katsushi Ikeuchi et al. Arbitrary view position and direction rendering for large-scale scenes, 2000, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000).

[9] Terrance E. Boult et al. Remote Reality via omni-directional imaging, 1998, SIGGRAPH '98.

[10] Harry Shum et al. Rendering with concentric mosaics, 1999, SIGGRAPH.

[11] Hiroshi Ishiguro et al. Omnidirectional visual information for navigating a mobile robot, 1993, Proceedings of the IEEE International Conference on Robotics and Automation.

[12] Hiroshi Ishiguro et al. Omni-Directional Stereo, 1992, IEEE Trans. Pattern Anal. Mach. Intell.

[13] Camillo J. Taylor. VideoPlus, 2000.

[14] Yasushi Yagi et al. Real-time omnidirectional image sensor (COPIS) for vision-guided navigation, 1994, IEEE Trans. Robotics Autom.

[15] Seth Teller et al. Automatic Extraction of Textured Vertical Facades from Pose Imagery, 1998.

[16] Paul Debevec et al. Modeling and Rendering Architecture from Photographs, 1996, SIGGRAPH.

[17] Richard Szeliski et al. The lumigraph, 1996, SIGGRAPH.

[18] C. J. Taylor et al. Minimization on the Lie Group SO(3) and Related Manifolds, 1994.

[19] John E. Dennis et al. Numerical methods for unconstrained optimization and nonlinear equations, 1983, Prentice Hall Series in Computational Mathematics.

[20] Kostas Daniilidis et al. Catadioptric camera calibration, 1999, Proceedings of the Seventh IEEE International Conference on Computer Vision.

[21] Richard Szeliski et al. Creating full view panoramic image mosaics and environment maps, 1997, SIGGRAPH.

[22] David J. Kriegman et al. Structure and motion from line segments in multiple images, 1992, Proceedings of the IEEE International Conference on Robotics and Automation.

[23] Richard Szeliski et al. Creating full view panoramic image mosaics and texture-mapped models, 1997, International Conference on Computer Graphics and Interactive Techniques.

[24] Jitendra Malik et al. Modeling and Rendering Architecture from Photographs: A hybrid geometry- and image-based approach, 1996, SIGGRAPH.

[25] Takeo Kanade et al. An Iterative Image Registration Technique with an Application to Stereo Vision, 1981, IJCAI.

[26] Andrew Lippman et al. Movie-maps: An application of the optical videodisc to computer graphics, 1980, SIGGRAPH '80.

[27] Shree K. Nayar et al. Catadioptric omnidirectional camera, 1997, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.