Structure from motion blur in low light

In theory, the precision of structure-from-motion estimation increases as the camera motion increases. In practice, larger camera motions induce motion blur, particularly in low light where longer exposures are needed. If the camera center moves during the exposure, the trajectory traces in a motion-blurred image encode both the 3D structure of scene points and the motion of the camera. In this paper, we propose an algorithm to explicitly estimate the 3D structure of point light sources and the camera motion from a single motion-blurred image of a low-light scene. The algorithm identifies the extremal points of the traces mapped out by the point sources in the image and classifies them into start and end sets. Each trace is then charted out incrementally using local curvature, yielding correspondences between start and end points. We use these correspondences to obtain an initial estimate of the epipolar geometry embedded in the motion-blurred image. The resulting reconstruction and the 2D traces are used to estimate the camera motion during the exposure interval, and multiple-view bundle adjustment is applied to refine the estimates.
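
The core geometric step, recovering the epipolar geometry from the start-end correspondences and triangulating the point sources, can be sketched as below. This is a minimal illustration using OpenCV's standard two-view routines (cv2.findEssentialMat, cv2.recoverPose, cv2.triangulatePoints) rather than the paper's own implementation; the arrays start_pts and end_pts (produced by the hypothetical trace-charting step) and the intrinsic matrix K are assumed inputs.

```python
import numpy as np
import cv2

def reconstruct_from_trace_endpoints(start_pts, end_pts, K):
    """Sketch: treat the trace start/end points as a two-view correspondence set.

    start_pts, end_pts : (N, 2) float arrays of pixel coordinates
                         (assumed outputs of the trace-charting step).
    K                  : (3, 3) camera intrinsic matrix (assumed known).
    Returns the relative rotation R, translation direction t, and the
    triangulated 3D positions of the light sources (up to scale).
    """
    start_pts = np.asarray(start_pts, dtype=np.float64)
    end_pts = np.asarray(end_pts, dtype=np.float64)

    # Essential matrix from the start/end correspondences (RANSAC for robustness).
    E, inliers = cv2.findEssentialMat(start_pts, end_pts, K,
                                      method=cv2.RANSAC, threshold=1.0)

    # Relative camera motion between the start and end of the exposure.
    _, R, t, _ = cv2.recoverPose(E, start_pts, end_pts, K, mask=inliers)

    # Projection matrices for the two "virtual" views (start and end of exposure).
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])

    # Triangulate the point light sources (homogeneous -> Euclidean coordinates).
    X_h = cv2.triangulatePoints(P0, P1, start_pts.T, end_pts.T)
    X = (X_h[:3] / X_h[3]).T
    return R, t, X
```

In the full method, the intermediate samples along each trace (not just the endpoints) further constrain the camera trajectory during the exposure, and a multiple-view bundle adjustment over those virtual views refines the initial estimate produced by a routine like the one above.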
