Lag Camera: A Moving Multi-Camera Array for Scene Acquisition

JVRB, 3(2006), no. 10. - Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as light fields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over both space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.
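To make the core idea concrete, the following Python sketch illustrates the kind of test a lag camera enables: a trailing camera revisits (approximately) the viewpoint of a leading camera after a time lag, so pixels that differ between the two exposures can be flagged as a moving occluder and replaced with the static scene seen in the other exposure. This is an illustrative sketch, not the authors' implementation; the helper names, the color-distance threshold, and the synthetic demo are assumptions, and the code further assumes the two images have already been registered to the same viewpoint.

```python
import numpy as np

def detect_moving_pixels(lead_image, lag_image, threshold=25.0):
    """Compare two exposures taken from (approximately) the same
    viewpoint at different times. Pixels whose color changed between
    the exposures are assumed to belong to a moving occluder; the
    rest are static scene. Inputs are HxWx3 uint8 arrays; the
    threshold on per-pixel color distance is an illustrative value."""
    diff = np.linalg.norm(
        lead_image.astype(np.float32) - lag_image.astype(np.float32),
        axis=-1)
    return diff > threshold  # boolean HxW mask of dynamic pixels

def composite_static_scene(lead_image, lag_image, moving_mask):
    """Fill pixels occluded in the lead exposure with the matching
    pixels from the lag exposure, assuming the occluder has moved on
    by the time the trailing camera reaches the same viewpoint."""
    result = lead_image.copy()
    result[moving_mask] = lag_image[moving_mask]
    return result

if __name__ == "__main__":
    # Synthetic demo: a static gradient scene with a bright square
    # "occluder" present only in the lead camera's exposure.
    row = np.linspace(0, 255, 64, dtype=np.uint8)
    scene = np.stack([np.tile(row, (64, 1))] * 3, axis=-1)
    lead = scene.copy()
    lead[20:40, 20:40] = 255          # occluder visible at time t
    lag = scene                       # occluder gone at time t + lag
    mask = detect_moving_pixels(lead, lag)
    restored = composite_static_scene(lead, lag, mask)
    assert np.array_equal(restored, scene)
    print(f"dynamic pixels detected: {mask.sum()}")
```

Because the rig itself keeps moving, the same differencing applies from many viewpoints along the path, which is what lets the hidden scene be recovered even when the occluder sits between the camera and the background at any single instant.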
