Motion estimation through efficient matching of a reduced number of reliable singular points

Motion estimation in video sequences is a classical, computationally intensive task required by a wide range of applications. Many different methods have been proposed to reduce the computational complexity, but the achieved reduction is not enough to allow real-time operation on non-specialized hardware. In this paper, an efficient selection of singular points for fast matching between consecutive images is presented, which makes real-time operation achievable. The selection of singular points consists in finding the image points that are robust to noise and to the aperture problem. This is accomplished by imposing restrictions on the gradient magnitude and the cornerness. The neighborhood of each singular point is characterized by a complex descriptor vector, which is highly robust to illumination changes and to small variations in the 3D camera viewpoint. The matching between singular points of consecutive images is performed by maximizing a similarity measure based on this descriptor vector. The set of correspondences yields a sparse motion vector field that accurately outlines the image motion. In order to demonstrate the efficiency of this approach, a video stabilization application has been developed, which uses the sparse motion vector field as input. Excellent results have been obtained, demonstrating the efficiency of the proposed motion estimation technique.
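The pipeline the abstract describes (select points that pass gradient-magnitude and cornerness restrictions, then match them across frames by maximizing a similarity measure) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Harris response stands in for their cornerness criterion, normalized cross-correlation over a raw pixel patch stands in for their complex descriptor vector, and all thresholds, window sizes, and function names are illustrative assumptions.

```python
import numpy as np

def box_sum(a, r=1):
    """Sum over a (2r+1)x(2r+1) neighborhood via padding and shifted adds."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a)
    h, w = a.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out

def select_singular_points(img, k=0.04, n=20):
    """Keep points with both high gradient magnitude and high cornerness
    (Harris response used here as an illustrative cornerness measure)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    ixx, iyy, ixy = box_sum(gx * gx), box_sum(gy * gy), box_sum(gx * gy)
    R = ixx * iyy - ixy * ixy - k * (ixx + iyy) ** 2   # Harris response
    gmag = np.hypot(gx, gy)
    # Relative thresholds (illustrative): strong gradient AND corner-like
    mask = (gmag > 0.1 * gmag.max()) & (R > 0.01 * R.max())
    ys, xs = np.nonzero(mask)
    order = np.argsort(R[ys, xs])[::-1][:n]            # strongest responses first
    return list(zip(ys[order].tolist(), xs[order].tolist()))

def ncc(p, q):
    """Normalized cross-correlation: a similarity measure that is invariant
    to affine illumination changes (stand-in for the paper's descriptor)."""
    p = p - p.mean()
    q = q - q.mean()
    d = np.linalg.norm(p) * np.linalg.norm(q)
    return float((p * q).sum() / d) if d else 0.0

def match_points(img1, pts1, img2, pts2, r=3):
    """For each point in frame 1, pick the frame-2 point maximizing similarity."""
    img1, img2 = img1.astype(float), img2.astype(float)
    patch = lambda im, y, x: im[y - r:y + r + 1, x - r:x + r + 1]
    matches = []
    for (y1, x1) in pts1:
        best, best_s = None, -2.0
        for (y2, x2) in pts2:
            s = ncc(patch(img1, y1, x1), patch(img2, y2, x2))
            if s > best_s:
                best, best_s = (y2, x2), s
        matches.append(((y1, x1), best))
    return matches

# Demo: a bright square translated by (2, 3) pixels between consecutive frames.
frame1 = np.zeros((40, 40))
frame1[10:20, 10:20] = 255.0
frame2 = np.roll(frame1, (2, 3), axis=(0, 1))
p1 = select_singular_points(frame1)
p2 = select_singular_points(frame2)
m = match_points(frame1, p1, frame2, p2)
dys = [b[0] - a[0] for a, b in m]
dxs = [b[1] - a[1] for a, b in m]
motion = (float(np.median(dys)), float(np.median(dxs)))
```

The resulting correspondences form a sparse motion vector field; in the demo the median displacement recovers the global translation between the two frames, which is exactly the kind of estimate a video stabilization stage would consume.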
