Recovering high dynamic range radiance maps from photographs

We present a method of recovering high dynamic range radiance maps from photographs taken with conventional imaging equipment. In our method, multiple photographs of the scene are taken with different amounts of exposure. Our algorithm uses these differently exposed photographs to recover the response function of the imaging process, up to a factor of scale, using the assumption of reciprocity. With the known response function, the algorithm can fuse the multiple photographs into a single, high dynamic range radiance map whose pixel values are proportional to the true radiance values in the scene. We demonstrate our method on images acquired with both photochemical and digital imaging processes. We discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing. Lastly, we demonstrate a few applications of high dynamic range radiance maps, such as synthesizing realistic motion blur and simulating the response of the human visual system.
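
The abstract describes the algorithm only at a high level. The sketch below illustrates the kind of least-squares formulation that reciprocity-based response recovery and exposure fusion typically reduce to: solve for the log inverse response g and the log irradiances of a small set of sampled pixels, then fuse all exposures with a weighted average. It is a minimal sketch, not the paper's reference implementation; the function names, the hat_weight weighting (including the "+ 1" floor that keeps saturated pixels from getting exactly zero weight), and the lam smoothness weight are illustrative assumptions, and 8-bit images are assumed throughout.

```python
import numpy as np

def hat_weight(z, n_levels=256):
    # Hat-shaped weighting that favours mid-range pixel values. The "+ 1"
    # floor is an assumption made here so the linear system stays well
    # conditioned even when many sampled pixels are saturated or black.
    return np.minimum(z, n_levels - 1 - z) + 1

def recover_response(Z, log_dt, lam=100.0, n_levels=256):
    """Least-squares recovery of g = ln f^{-1} (log inverse response) and the
    log irradiances of the sampled pixels.

    Z      : (N, P) integer pixel values for N sampled locations in P exposures
    log_dt : (P,) natural log of the exposure times
    lam    : weight of the smoothness term on g (illustrative value)
    """
    N, P = Z.shape
    A = np.zeros((N * P + n_levels - 1, n_levels + N))
    b = np.zeros(A.shape[0])

    k = 0
    for i in range(N):                       # data-fitting equations
        for j in range(P):
            w = hat_weight(Z[i, j], n_levels)
            A[k, Z[i, j]] = w
            A[k, n_levels + i] = -w
            b[k] = w * log_dt[j]
            k += 1
    A[k, n_levels // 2] = 1.0                # fix the free scale: g(mid) = 0
    k += 1
    for z in range(1, n_levels - 1):         # second-difference smoothness on g
        w = hat_weight(z, n_levels)
        A[k, z - 1:z + 2] = lam * w * np.array([1.0, -2.0, 1.0])
        k += 1

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n_levels], x[n_levels:]        # g, and ln E of the sampled pixels

def radiance_map(images, log_dt, g, n_levels=256):
    """Fuse the exposures into a log radiance map using the recovered g.
    Each pixel is a weighted average of g(Z) - ln(dt) over the exposures."""
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, ldt in zip(images, log_dt):
        w = hat_weight(img.astype(np.int64), n_levels)
        num += w * (g[img] - ldt)
        den += w
    return num / den                         # ln E, up to the unknown scale
```

In a sketch like this, Z would hold only a few dozen well-distributed pixel locations, since the system has 256 + N unknowns; the resulting ln E map is proportional to scene radiance only up to the unrecovered scale factor noted in the abstract.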
