Light field mapping: efficient representation and hardware rendering of surface light fields

A light field parameterized on the surface offers a natural and intuitive description of the view-dependent appearance of scenes with complex reflectance properties. To enable the use of surface light fields in real-time rendering, we develop a compact representation suitable for an accelerated graphics pipeline. We propose to approximate the light field data by partitioning it over elementary surface primitives and factorizing each part into a small set of lower-dimensional functions. We show that our representation can be further compressed using standard image compression techniques, leading to extremely compact data sets that are up to four orders of magnitude smaller than the input data. Finally, we develop an image-based rendering method, light field mapping, that can visualize surface light fields directly from this compact representation at interactive frame rates on a personal computer. We also implement a new method of approximating the light field data that produces positive-only factors, allowing for faster rendering on simpler graphics hardware than earlier methods. We demonstrate results for a variety of non-trivial synthetic scenes and physical objects scanned through 3D photography.
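The core approximation described above can be illustrated with a small sketch. Assuming the light field data for one surface primitive has been discretized into a non-negative matrix `F` (rows indexing surface samples, columns indexing view directions — the names `F`, `factor_light_field`, `num_terms` are illustrative, not from the paper), a positive-only factorization of the kind the abstract mentions can be computed with non-negative matrix factorization via multiplicative updates:

```python
import numpy as np

def factor_light_field(F, num_terms=3, iterations=500, eps=1e-9):
    """Hypothetical sketch: approximate F (m x n) as G (m x k) @ H (k x n)
    with G, H >= 0, so each of the k terms can be accumulated by additive
    texture blending in hardware. Uses Lee & Seung multiplicative updates
    for the Frobenius-norm objective."""
    rng = np.random.default_rng(0)
    m, n = F.shape
    G = rng.random((m, num_terms)) + eps   # "surface map" factors
    H = rng.random((num_terms, n)) + eps   # "view map" factors
    for _ in range(iterations):
        # Multiplicative updates preserve non-negativity by construction
        H *= (G.T @ F) / (G.T @ G @ H + eps)
        G *= (F @ H.T) / (G @ H @ H.T + eps)
    return G, H

if __name__ == "__main__":
    # Toy non-negative "light field": exactly low-rank radiance data
    rng = np.random.default_rng(1)
    F = rng.random((50, 3)) @ rng.random((3, 40))
    G, H = factor_light_field(F, num_terms=3)
    err = np.linalg.norm(F - G @ H) / np.linalg.norm(F)
    print(f"relative reconstruction error: {err:.4f}")
```

Because both factor matrices are non-negative, a renderer can sum the `k` per-term products with simple additive blending, without the signed arithmetic that a PCA-style factorization would require; this is the property that enables rendering on simpler graphics hardware.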
