Unstructured light field rendering using on-the-fly focus measurement

This paper introduces a novel image-based rendering method that takes input from unstructured cameras and synthesizes high-quality free-viewpoint images. Our method uses a set of depth layers to handle scenes with large depth ranges. For each pixel of the synthesized image, the optimal depth layer is assigned automatically by the on-the-fly focus measurement algorithm that we propose. We implemented this method efficiently on a PC and achieved near-interactive frame rates.
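The abstract does not spell out the focus measurement itself, but focus measures of this kind are commonly computed as the color consistency of the input-camera samples reprojected through each candidate depth layer: the layer whose samples agree most (lowest variance) is "in focus" for that pixel. The sketch below is a hypothetical illustration of that idea, not the paper's actual algorithm; the array layout, the function names `focus_measure` and `select_depth_layers`, and the use of per-layer color variance are all assumptions.

```python
import numpy as np

def focus_measure(samples):
    """Hypothetical focus measure: per-pixel variance of the colors that the
    input cameras project to this pixel. samples: (n_cameras, H, W, 3).
    Low variance means the candidate depth is consistent (in focus)."""
    return np.var(samples, axis=0).sum(axis=-1)  # (H, W)

def select_depth_layers(reprojections):
    """reprojections: (n_layers, n_cameras, H, W, 3) array of the colors each
    input camera contributes to each output pixel under each candidate depth.
    Returns the per-pixel index of the best layer and a blended output color."""
    n_layers = reprojections.shape[0]
    # Focus measure for every layer: (n_layers, H, W).
    fm = np.stack([focus_measure(reprojections[l]) for l in range(n_layers)])
    best = np.argmin(fm, axis=0)  # (H, W): most consistent layer per pixel
    # Blend by averaging the camera colors at the selected layer.
    mean_colors = reprojections.mean(axis=1)  # (n_layers, H, W, 3)
    H, W = best.shape
    out = mean_colors[best, np.arange(H)[:, None], np.arange(W)[None, :]]
    return best, out
```

In a real renderer the reprojection step (warping each input view onto each depth plane) would dominate the cost and is what the paper implements efficiently on a PC; this sketch only shows the per-pixel layer-selection logic.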
