Novel Stereoscopic View Generation by Image-Based Rendering Coordinated with Depth Information

This paper describes a method for generating stereoscopic views by image-based rendering in wide outdoor environments. Stereoscopic views are generated from an omnidirectional image sequence with a light field rendering approach, which synthesizes a novel view from a set of captured images. Conventional novel view generation methods suffer from distortion in the generated image because it is composed of parts of several omnidirectional images captured at different points. To overcome this problem, the distances between the novel viewpoint and the observed real objects must be taken into account in the rendering process. In the proposed method, image distortion is reduced by generating stereoscopic images using depth values estimated by dynamic programming (DP) matching between images that are observed from different points but contain the same ray information in the real world. In experiments, stereoscopic images of wide outdoor environments are generated and displayed.
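The abstract does not spell out the DP matching step, so the following is a minimal sketch, not the authors' implementation: it matches two 1-D intensity scanlines by dynamic programming and converts the resulting disparities to depth under a simple parallel-camera model. The gap cost, baseline, and focal length values, and the assumption of rectified scanlines, are illustrative choices introduced here, not taken from the paper.

```python
# Minimal sketch (assumed, not the authors' method): scanline correspondence by
# dynamic programming, followed by disparity-to-depth conversion.

import numpy as np


def dp_match(left: np.ndarray, right: np.ndarray, gap_cost: float = 8.0):
    """Match two 1-D intensity scanlines with dynamic programming.

    Returns a list of (i, j) index pairs giving the recovered correspondence.
    """
    n, m = len(left), len(right)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    # Fill the DP table: diagonal = match, horizontal/vertical = occlusion (gap).
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and j > 0:
                match = cost[i - 1, j - 1] + abs(float(left[i - 1]) - float(right[j - 1]))
                cost[i, j] = min(cost[i, j], match)
            if i > 0:
                cost[i, j] = min(cost[i, j], cost[i - 1, j] + gap_cost)
            if j > 0:
                cost[i, j] = min(cost[i, j], cost[i, j - 1] + gap_cost)
    # Backtrack from (n, m) to (0, 0) to recover the matched pixel pairs.
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and np.isclose(
            cost[i, j],
            cost[i - 1, j - 1] + abs(float(left[i - 1]) - float(right[j - 1])),
        ):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif i > 0 and np.isclose(cost[i, j], cost[i - 1, j] + gap_cost):
            i -= 1
        else:
            j -= 1
    return pairs[::-1]


def depth_from_disparity(pairs, baseline: float, focal_px: float):
    """Convert matched column indices to depth, assuming a parallel-camera model."""
    depths = {}
    for i, j in pairs:
        disparity = abs(i - j)
        if disparity > 0:
            depths[i] = baseline * focal_px / disparity
    return depths


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, size=40)
    right = np.roll(left, -3)  # synthetic 3-pixel shift between the two views
    pairs = dp_match(left, right)
    print(depth_from_disparity(pairs, baseline=0.5, focal_px=800.0))
```

In the paper's setting the matched samples would come from omnidirectional images that share the same ray in the real world, and the recovered depth would then steer which captured rays are composited into each stereoscopic view; the sketch above only illustrates the generic DP correspondence step.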
