Synthesizing Environment Maps from a Single Image

Environment mapping is a popular technique for creating consistent lighting when compositing a virtual object into a real scene. However, capturing an environment map usually requires physical access to the scene to obtain illumination measurements. But what if all that is available is a single photograph of the scene? In this paper, we study techniques for synthesizing plausible environment maps from a single image. By analogy with texture synthesis, the goal is to use the small amount of available data to generate an environment map that is likely to have come from that scene. In particular, we are interested in understanding the role of geometric information in constructing visually realistic environment maps. To this end, we implement several environment synthesis strategies that employ varying amounts of 3D scene geometry information. We measure the quality of the synthesized results with human-subject studies that evaluate the appearance of objects illuminated by different environment maps, both in still images and in video.
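The abstract describes a family of synthesis strategies but gives no implementation. As a minimal sketch of the simplest, geometry-free end of that spectrum, the code below mirror-tiles a single photograph into a 2:1 equirectangular (latitude-longitude) panorama that a renderer could use as an environment map. The function name `synthesize_envmap`, the reflect-tiling rule, and the output resolution are illustrative assumptions, not the authors' method.

```python
import numpy as np

def synthesize_envmap(photo: np.ndarray, out_w: int = 1024) -> np.ndarray:
    """Geometry-free baseline: mirror-tile one photo into a 2:1 lat-long map.

    NOTE: a hypothetical sketch, not the paper's algorithm. `photo` is an
    (H, W, C) image; the result is an (out_w // 2, out_w, C) panorama.
    """
    h, w, _ = photo.shape
    out_h = out_w // 2  # standard 2:1 latitude-longitude aspect ratio

    # Nearest-neighbor row indices that stretch the photo to the map height.
    rows = (np.arange(out_h) * h) // out_h

    # Horizontal "reflect" tiling: sweep across the photo and back twice,
    # so the map's left and right edges sample nearly the same photo column
    # and the 360-degree azimuth wrap is approximately seamless.
    raw = (np.arange(out_w) * 4 * w) // out_w
    period = 2 * w
    raw = raw % period
    cols = np.where(raw < w, raw, period - 1 - raw)

    return photo[rows[:, None], cols[None, :], :]

if __name__ == "__main__":
    # Random stand-in for a real photograph of the scene.
    photo = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
    env = synthesize_envmap(photo)
    print(env.shape)  # (512, 1024, 3)
```

Reflect tiling is used instead of plain repetition only so the azimuthal wrap has no hard seam; a strategy that exploits geometry would instead warp the photograph according to an estimated scene model before filling in the unseen directions.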
