Active refocusing of images and videos

We present a system for refocusing images and videos of dynamic scenes using a novel, single-view depth estimation method. Our method for obtaining depth is based on the defocus of a sparse set of dots projected onto the scene. In contrast to other active illumination techniques, the projected pattern of dots can be removed from each captured image and its brightness easily controlled in order to avoid under- or over-exposure. The depths corresponding to the projected dots and a color segmentation of the image are used to compute an approximate depth map of the scene with clean region boundaries. The depth map is used to refocus the acquired image after the dots are removed, simulating realistic depth of field effects. Experiments on a wide variety of scenes, including close-ups and live action, demonstrate the effectiveness of our method.
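As a concrete illustration of the final step described above, the sketch below simulates a depth-of-field effect given a color image and an approximate per-pixel depth map. It is not the paper's implementation: the function refocus, the parameters blur_strength and n_layers, and the simple thin-lens circle-of-confusion model are illustrative assumptions, written in Python with NumPy and SciPy.

import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focus_depth, blur_strength=8.0, n_layers=12):
    """Synthetically refocus `image` (H x W x 3, floats in [0, 1]) using `depth` (H x W).

    Pixels near `focus_depth` stay sharp; others are blurred in proportion to a
    simple circle-of-confusion model, c = blur_strength * |1 - focus_depth / depth|.
    The image is split into blur layers, each layer is blurred with its own
    Gaussian kernel, and the layers are composited and renormalized.
    """
    # Circle of confusion (in pixels) for every pixel, clamped away from zero depth.
    coc = blur_strength * np.abs(1.0 - focus_depth / np.maximum(depth, 1e-6))

    # Quantize the circle of confusion into a small number of blur levels.
    levels = np.linspace(0.0, coc.max(), n_layers)
    bin_width = levels[1] - levels[0]
    out = np.zeros_like(image)
    weight = np.zeros(depth.shape)

    for sigma in levels:
        # Pixels whose blur level falls near this layer's sigma.
        mask = (np.abs(coc - sigma) <= bin_width).astype(float)
        if mask.sum() == 0:
            continue
        # Blur both the masked image and the mask itself so layer seams stay soft.
        out += gaussian_filter(image * mask[..., None], sigma=(sigma, sigma, 0))
        weight += gaussian_filter(mask, sigma=sigma)

    return out / np.maximum(weight[..., None], 1e-6)

For example, refocus(image, depth, focus_depth=1.5) keeps scene points near depth 1.5 sharp and blurs nearer and farther regions progressively. A layered composite like this is a common, inexpensive approximation to distributed ray tracing of a finite aperture; blurring the layer masks along with the layers avoids hard halos at depth boundaries.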
