Computational Cameras: Approaches, Benefits and Limits

A computational camera uses a combination of optics and software to produce images that cannot be taken with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras have been demonstrated, some designed to achieve new imaging functionalities and others to reduce the complexity of traditional imaging. In this article, we describe how computational cameras have evolved and present a taxonomy of the technical approaches they use. We explore the benefits and limits of computational imaging, and discuss how it relates to the adjacent and overlapping fields of digital imaging, computational photography, and computational image sensors.

Over the last century, the evolution of the camera has been truly remarkable. Through this evolution, however, the basic model underlying the camera has remained essentially the same: the camera obscura (Figure 1(a)). The traditional camera has a detector and a standard lens that captures only those principal rays that pass through its center of projection, or effective pinhole, to produce the familiar linear perspective image. In other words, the traditional camera performs a very simple and restrictive sampling of the complete set of rays, or light field, that resides in any real scene.

A computational camera (Figure 1(b)) uses a combination of novel optics and computation to produce the final image [Nayar 2006a]. The novel optics map rays in the light field of the scene to pixels on the detector in some unconventional fashion. For instance, the ray shown in Figure 1(b) has been geometrically redirected by the optics to a pixel different from the one it would have reached in a traditional camera. As illustrated by the change in color from yellow to red, the ray may also be photometrically altered by the optics. In all cases, the captured image is optically coded and may not be meaningful in its raw form.
The computational module has a model of the optics, which it uses to decode the captured image and produce a new type of image that can benefit a vision system. The vision system may be a human observing the image or a computer vision system that uses the image to interpret the scene it represents.

Figure 1: (a) The traditional camera model, which is based on the camera obscura. (b) A computational camera uses optical coding followed …
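The coding-then-decoding pipeline described above can be sketched in a few lines. The toy example below is not from the article; the specific coding model is an illustrative assumption, representing the optics as a known, invertible ray remapping (a pixel permutation) combined with a photometric alteration (per-pixel gains), which the computational module then inverts to decode the captured image.

```python
import numpy as np

# Toy model of a computational camera (illustrative, not the article's design):
# the "optics" geometrically remap rays via a known pixel permutation and
# photometrically alter them via known per-pixel gains. The computational
# module, knowing this model of the optics, decodes the raw capture.
rng = np.random.default_rng(seed=1)

scene = rng.random((4, 4))                  # toy scene radiance values
perm = rng.permutation(scene.size)          # geometric ray redirection (known)
gain = 0.5 + 0.5 * rng.random(scene.size)   # photometric alteration (known)

# Optical coding: the raw image is scrambled and attenuated, and is not
# meaningful in this form.
coded = (scene.ravel()[perm] * gain).reshape(scene.shape)

# Decoding: undo the photometric alteration, then the geometric remapping.
decoded = np.empty(scene.size)
decoded[perm] = coded.ravel() / gain
decoded = decoded.reshape(scene.shape)      # recovers the original scene
```

The essential point the sketch illustrates is that the coded image alone is unusable; only together with an accurate model of the optics (here, `perm` and `gain`) can the computational module recover a meaningful image.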

[1]  Moshe Ben-Ezra,et al.  High resolution large format tile-scan camera: Design, calibration, and extended depth of field , 2010, 2010 IEEE International Conference on Computational Photography (ICCP).

[2]  Narendra Ahuja,et al.  High dynamic range panoramic imaging , 2001, Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001.

[3]  Song Zhang,et al.  Ultrafast 3-D shape measurement with an off-the-shelf DLP projector. , 2010, Optics express.

[4]  Terrance E. Boult,et al.  Constraining Object Features Using a Polarization Reflectance Model , 1991, IEEE Trans. Pattern Anal. Mach. Intell..

[5]  David J. Brady,et al.  Compressive imaging spectrometers using coded apertures , 2006, SPIE Defense + Commercial Sensing.

[6]  Narendra Ahuja,et al.  Panoramic image acquisition , 1996, Proceedings CVPR IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

[7]  Shree K. Nayar,et al.  Multiview radial catadioptric imaging for scene capture , 2006, SIGGRAPH 2006.

[8]  Shree K. Nayar,et al.  Adaptive dynamic range imaging: optical control of pixel exposures over space and time , 2003, Proceedings Ninth IEEE International Conference on Computer Vision.

[9]  Kiriakos N. Kutulakos,et al.  A Theory of Refractive and Specular 3D Shape by Light-Path Triangulation , 2005, Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1.

[10]  P. Hanrahan,et al.  Light Field Photography with a Hand-held Plenoptic Camera , 2005 .

[11]  Marc Levoy,et al.  High performance imaging using large camera arrays , 2005, ACM Trans. Graph..

[12]  Shree K. Nayar,et al.  Multispectral Imaging Using Multiplexed Illumination , 2007, 2007 IEEE 11th International Conference on Computer Vision.

[13]  Takeo Kanade,et al.  Virtualized Reality: Constructing Virtual Worlds from Real Scenes , 1997, IEEE Multim..

[14]  Marc Levoy,et al.  The Frankencamera: an experimental platform for computational photography , 2010, ACM Trans. Graph..

[15]  Matthew O'Toole,et al.  Optical computing for fast light transport analysis , 2010, ACM Trans. Graph..

[16]  Yoav Y. Schechner,et al.  Clear underwater vision , 2004, Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004..

[17]  Ruzena Bajcsy,et al.  Catadioptric sensors that approximate wide-angle perspective projections , 2000, Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662).

[18]  Ramesh Raskar,et al.  Removing photography artifacts using gradient projection and flash-exposure sampling , 2005, SIGGRAPH 2005.

[19]  Tomoo Mitsunaga,et al.  Coded rolling shutter photography: Flexible space-time sampling , 2010, 2010 IEEE International Conference on Computational Photography (ICCP).

[20]  Marc Levoy,et al.  Veiling glare in high dynamic range imaging , 2007, ACM Trans. Graph..

[21]  Stephen Lin,et al.  A Prism-Mask System for Multispectral Video Acquisition. , 2011, IEEE transactions on pattern analysis and machine intelligence.

[22]  Saburo Tsuji,et al.  Panoramic representation for route recognition by a mobile robot , 1992, International Journal of Computer Vision.

[23]  Marc Levoy,et al.  Light field rendering , 1996, SIGGRAPH.

[25]  Narendra Ahuja,et al.  Split Aperture Imaging for High Dynamic Range , 2004, International Journal of Computer Vision.

[26]  Ramesh Raskar,et al.  Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing , 2007, SIGGRAPH 2007.

[27]  Ramesh Raskar,et al.  Coded exposure photography: motion deblurring using fluttered shutter , 2006, SIGGRAPH '06.

[28]  Shree K. Nayar,et al.  A Theory of Single-Viewpoint Catadioptric Image Formation , 1999, International Journal of Computer Vision.

[29]  Hae-Seung Lee,et al.  Analog VLSI systems for image acquisition and fast early vision processing , 1992, International Journal of Computer Vision.

[30]  Shree K. Nayar,et al.  A general imaging model and a method for finding its parameters , 2001, Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001.

[31]  Robert J. Woodham,et al.  Photometric method for determining surface orientation from multiple images , 1980 .

[32]  Eero P. Simoncelli,et al.  Range estimation by optical differentiation. , 1998, Journal of the Optical Society of America. A, Optics, image science, and vision.

[34]  Steven M. Seitz,et al.  The Space of All Stereo Images , 2004, International Journal of Computer Vision.

[35]  Michael F. Cohen,et al.  Digital photography with flash and no-flash image pairs , 2004, ACM Trans. Graph..

[36]  Yael Pritch,et al.  Omnistereo: Panoramic Stereo Imaging , 2001, IEEE Trans. Pattern Anal. Mach. Intell..

[37]  Ali Adibi,et al.  Multimodal multiplex spectroscopy using photonic crystals. , 2003, Optics express.

[38]  Shmuel Peleg,et al.  Mosaicing with Generalized Strips , 1998 .

[39]  Shree K. Nayar,et al.  Video super-resolution using controlled subpixel detector shifts , 2005, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[40]  Frédéric Guichard,et al.  Extended depth-of-field using sharpness transport across color channels , 2009, Electronic Imaging.

[41]  Shree K. Nayar,et al.  Programmable Imaging: Towards a Flexible Camera , 2006, International Journal of Computer Vision.

[42]  Alex Pentland,et al.  A New Sense for Depth of Field , 1985, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[43]  Frédo Durand,et al.  4D frequency analysis of computational cameras for depth of field extension , 2009, SIGGRAPH '09.

[44]  David Salesin,et al.  Spatio-angular resolution tradeoffs in integral photography , 2006, EGSR '06.

[45]  Marc Levoy,et al.  Synthetic aperture confocal imaging , 2004, ACM Trans. Graph..

[46]  Ravindra Athale,et al.  Flexible multimodal camera using a light field architecture , 2009, 2009 IEEE International Conference on Computational Photography (ICCP).

[47]  David J. Kriegman,et al.  Helmholtz Stereopsis: Exploiting Reciprocity for Surface Reconstruction , 2002, International Journal of Computer Vision.

[48]  Ashok Veeraraghavan,et al.  Flexible Voxels for Motion-Aware Videography , 2010, ECCV.

[49]  Richard Szeliski,et al.  Creating full view panoramic image mosaics and environment maps , 1997, SIGGRAPH.

[50]  Marc Levoy,et al.  Dual photography , 2005, SIGGRAPH 2005.

[51]  Shree K. Nayar,et al.  Scene Collages and Flexible Camera Arrays , 2007, Rendering Techniques.

[53]  Ramesh Raskar,et al.  Non-photorealistic camera: depth edge detection and stylized rendering using multi-flash imaging , 2004, SIGGRAPH 2004.

[54]  Narendra Ahuja,et al.  Range estimation from focus using a non-frontal imaging camera , 1993, International Journal of Computer Vision.

[55]  Yasushi Yagi,et al.  Omnidirectional imaging with hyperboloidal projection , 1993, Proceedings of 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '93).

[56]  Murali Subbarao,et al.  Focused image recovery from two defocused images recorded with different camera settings , 1995, IEEE Transactions on Image Processing.

[59]  Daphna Weinshall,et al.  Mosaicing New Views: The Crossed-Slits Projection , 2003, IEEE Trans. Pattern Anal. Mach. Intell..

[60]  Ramesh Raskar,et al.  Looking around the corner using transient imaging , 2009, 2009 IEEE 12th International Conference on Computer Vision.

[61]  Frédo Durand,et al.  Flash photography enhancement via intrinsic relighting , 2004, SIGGRAPH 2004.

[62]  Herbert E. Ives,et al.  Parallax Panoramagrams Made with a Large Diameter Lens , 1930 .

[64]  Ramesh Raskar,et al.  Reinterpretable Imager: Towards Variable Post‐Capture Space, Angle and Time Resolution in Photography , 2010, Comput. Graph. Forum.

[65]  Harry Shum,et al.  Rendering with concentric mosaics , 1999, SIGGRAPH.

[66]  Rajiv Gupta,et al.  Linear Pushbroom Cameras , 1994, ECCV.

[67]  Marc Levoy,et al.  Symmetric photography: exploiting data-sparseness in reflectance fields , 2006, EGSR '06.

[69]  H Harashima,et al.  3-D computer graphics based on integral photography. , 2001, Optics express.

[70]  Frédo Durand,et al.  Image and depth from a conventional camera with a coded aperture , 2007, ACM Trans. Graph..

[71]  Andrew Gardner,et al.  Performance relighting and reflectance transformation with time-multiplexed illumination , 2005, ACM Trans. Graph..

[72]  Pieter Peers,et al.  Rapid Acquisition of Specular and Diffuse Normal Maps from Polarized Spherical Gradient Illumination , 2007 .

[73]  Shree K. Nayar,et al.  A theory of multiplexed illumination , 2003, Proceedings Ninth IEEE International Conference on Computer Vision.

[74]  Shree K. Nayar,et al.  Depth from Diffusion , 2010, 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

[75]  Shree K. Nayar,et al.  360 x 360 Mosaics , 2000, Computer Vision and Pattern Recognition.

[76]  M. Srinivasan,et al.  Reflective surfaces for panoramic imaging. , 1997, Applied optics.

[77]  Stephen Lin,et al.  Coded aperture pairs for depth from defocus , 2009, 2009 IEEE 12th International Conference on Computer Vision.

[78]  Shree K. Nayar,et al.  Separation of Reflection Components Using Color and Polarization , 1997, International Journal of Computer Vision.

[79]  Li Zhang,et al.  Projection defocus analysis for scene capture and image display , 2006, ACM Trans. Graph..

[80]  G. Häusler,et al.  A method to increase the depth of focus by two step image processing , 1972 .

[81]  Kiriakos N. Kutulakos,et al.  Confocal Stereo , 2006, International Journal of Computer Vision.

[82]  Joaquim Salvi,et al.  Pattern codification strategies in structured light systems , 2004, Pattern Recognit..

[83]  Tomoyuki Nishita,et al.  Extracting depth and matte using a color-filtered aperture , 2008, SIGGRAPH Asia '08.

[84]  Li Zhang,et al.  Spacetime stereo: shape recovery for dynamic scenes , 2003, 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings..

[85]  Shree K. Nayar,et al.  Rectified Catadioptric Stereo Sensors , 2002, IEEE Trans. Pattern Anal. Mach. Intell..

[86]  Marc Levoy Experimental Platforms for Computational Photography , 2010, IEEE Computer Graphics and Applications.

[87]  W. Cathey,et al.  Extended depth of field through wave-front coding. , 1995, Applied optics.

[89]  S. Nayar,et al.  Diffusion coded photography for extended depth of field , 2010, SIGGRAPH 2010.

[90]  Yuandong Tian,et al.  (De) focusing on global light transport for active scene recovery , 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.

[91]  T. M. Cannon,et al.  Coded aperture imaging with uniformly redundant arrays. , 1978, Applied optics.

[92]  Berthold K. P. Horn,et al.  Density reconstruction using arbitrary ray-sampling schemes , 1978, Proceedings of the IEEE.

[93]  Carver Mead,et al.  Analog VLSI and neural systems , 1989 .

[94]  D. Brady Optical Imaging and Spectroscopy , 2009 .

[98]  Ramesh Raskar,et al.  Fast separation of direct and global components of a scene using high frequency illumination , 2006, SIGGRAPH 2006.

[101]  S. Nayar,et al.  What are good apertures for defocus deblurring? , 2009, 2009 IEEE International Conference on Computational Photography (ICCP).

[102]  Kiriakos N. Kutulakos,et al.  A theory of inverse light transport , 2005, Tenth IEEE International Conference on Computer Vision (ICCV'05) Volume 1.

[103]  Ramesh Raskar,et al.  Coded Strobing Photography: Compressive Sensing of High Speed Periodic Videos , 2011, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[105]  Takeo Kanade,et al.  A multiple-baseline stereo , 1991, Proceedings. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.

[106]  Shmuel Peleg,et al.  Stereo panorama with a single camera , 1999, Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149).

[107]  Predrag Milojkovic,et al.  ACTIVE-EYES: an adaptive pixel-by-pixel image-segmentation sensor architecture for high-dynamic-range hyperspectral imaging. , 2002, Applied optics.

[108]  Shree K. Nayar,et al.  Caustics of catadioptric cameras , 2001, Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001.

[109]  Edward H. Adelson,et al.  Single Lens Stereo with a Plenoptic Camera , 1992, IEEE Trans. Pattern Anal. Mach. Intell..

[110]  Paul Debevec Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography , 2008, SIGGRAPH Classes.

[112]  Takeo Kanade,et al.  Virtualized reality: constructing time-varying virtual worlds from real world events , 1997, Proceedings. Visualization '97 (Cat. No. 97CB36155).

[113]  Shree K. Nayar,et al.  Enhancing resolution along multiple imaging dimensions using assorted pixels , 2005, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[114]  Takeo Kanade,et al.  A Multiple-Baseline Stereo , 1993, IEEE Trans. Pattern Anal. Mach. Intell..

[115]  Oliver Cossairt,et al.  Spectral Focal Sweep: Extended depth of field from chromatic aberrations , 2010, 2010 IEEE International Conference on Computational Photography (ICCP).

[116]  Yoav Y Schechner,et al.  Depth from diffracted rotation. , 2006, Optics letters.

[117]  In-So Kweon,et al.  Single Lens Stereo with a Biprism , 1998, MVA.

[118]  Amit Ashok,et al.  Pseudorandom phase masks for superresolution imaging from subpixel shifting. , 2007, Applied optics.

[119]  Steven M. Seitz,et al.  Multiperspective Imaging , 2003, IEEE Computer Graphics and Applications.

[120]  Shree K. Nayar,et al.  Computational Cameras: Redefining the Image , 2006, Computer.

[122]  Peter Kohl,et al.  Temporal Pixel Multiplexing for simultaneous high-speed high-resolution imaging , 2010, Nature Methods.

[125]  Chia-Kai Liang,et al.  Programmable aperture photography: multiplexed light field acquisition , 2008, SIGGRAPH 2008.

[126]  Shree K. Nayar,et al.  Flexible Depth of Field Photography , 2011, IEEE Trans. Pattern Anal. Mach. Intell..

[127]  M. Gustafsson Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy , 2000, Journal of microscopy.

[128]  Shuntaro Yamazaki,et al.  Temporal Dithering of Illumination for Fast Active Vision , 2008, ECCV.

[131]  Shree K. Nayar,et al.  Generalized Assorted Pixel Camera: Postcapture Control of Resolution, Dynamic Range, and Spectrum , 2010, IEEE Transactions on Image Processing.

[132]  G Indebetouw,et al.  Imaging with Fresnel zone pupil masks: extended depth of field. , 1984, Applied optics.

[133]  M. Gustafsson Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution. , 2005, Proceedings of the National Academy of Sciences of the United States of America.

[134]  Shmuel Peleg,et al.  Panoramic mosaics by manifold projection , 1997, Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition.