A Spherical Model Based Keypoint Descriptor and Matching Algorithm for Omnidirectional Images

Omnidirectional images generally exhibit nonlinear distortion in the radial direction. Traditional algorithms such as the scale-invariant feature transform (SIFT) and Descriptor-Nets (D-Nets) perform poorly when matching omnidirectional images because they cannot handle this distortion. To address the problem, a new voting-based matching algorithm is proposed that combines a spherical model with the D-Nets algorithm. Because the sphere-based keypoint descriptor encodes the distortion of the omnidirectional image, the proposed matching algorithm is invariant to that distortion. Keypoint matching experiments are performed on three pairs of omnidirectional images, comparing the proposed algorithm with SIFT and D-Nets. The results show that the proposed algorithm is more robust and more precise than SIFT and D-Nets for matching omnidirectional images. Compared with SIFT and D-Nets, the proposed algorithm has two main advantages: (a) it yields more correctly matched keypoints, and (b) the matched keypoints cover a wider area of the image, including severely distorted regions.
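
The sketch below illustrates the general idea behind a sphere-based, D-Nets-style descriptor; it is not the authors' implementation. Keypoints are lifted from the omnidirectional image onto the unit sphere, and the "strip" joining a pair of keypoints is sampled along the great-circle arc rather than a straight image line, so the radial distortion is taken into account. The equidistant projection model (r = f·θ), the function names, and all parameters are assumptions made for illustration only.

```python
# Minimal sketch of a sphere-based D-Nets-style strip descriptor.
# Assumes an equidistant fisheye projection (r = f * theta); all names
# and parameters below are illustrative, not taken from the paper.
import numpy as np

def pixel_to_sphere(u, v, cx, cy, f):
    """Map an image pixel to a unit vector on the sphere (equidistant model)."""
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)           # radial distance from the image centre
    theta = r / f                  # polar angle under the equidistant assumption
    phi = np.arctan2(dv, du)       # azimuth angle
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def sphere_to_pixel(x, cx, cy, f):
    """Inverse mapping, used to sample image intensities along the arc."""
    theta = np.arccos(np.clip(x[2], -1.0, 1.0))
    phi = np.arctan2(x[1], x[0])
    r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def great_circle_samples(p, q, n):
    """n unit vectors spaced along the great-circle arc from p to q (slerp)."""
    omega = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    if omega < 1e-8:               # coincident keypoints: degenerate arc
        return np.tile(p, (n, 1))
    t = np.linspace(0.0, 1.0, n)
    return (np.sin((1 - t)[:, None] * omega) * p +
            np.sin(t[:, None] * omega) * q) / np.sin(omega)

def strip_descriptor(img, kp_a, kp_b, cx, cy, f, n_samples=16, n_bits=2):
    """Quantised intensity profile along the spherical arc joining two keypoints,
    analogous to a D-Nets strip token but traced on the sphere so that the
    radial distortion of the omnidirectional image is accounted for."""
    pa = pixel_to_sphere(*kp_a, cx, cy, f)
    pb = pixel_to_sphere(*kp_b, cx, cy, f)
    values = []
    for x in great_circle_samples(pa, pb, n_samples):
        u, v = sphere_to_pixel(x, cx, cy, f)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < img.shape[0] and 0 <= ui < img.shape[1]:
            values.append(img[vi, ui])
    if not values:
        return ()
    values = np.asarray(values, dtype=float)
    values = (values - values.min()) / (np.ptp(values) + 1e-9)   # normalise to [0, 1]
    token = np.minimum((values * (1 << n_bits)).astype(int),
                       (1 << n_bits) - 1)                        # quantise to n_bits
    return tuple(token)
```

In a D-Nets-style pipeline, such strip tokens would be hashed into a table and candidate matches accumulated by voting; the only change suggested here is that each strip is traced on the sphere instead of along a straight line in the distorted image.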

[1] Mei Chen et al., "Food recognition using statistics of pairwise local features," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2010.

[2] Matthijs C. Dorst, "Distinctive Image Features from Scale-Invariant Keypoints," 2011.

[3] Abed Malti et al., "Feature detection and matching in images with radial distortion," IEEE International Conference on Robotics and Automation, 2010.

[4] A. Makadia et al., "Image processing in catadioptric planes: spatiotemporal derivatives and optical flow computation," Proceedings of the IEEE Workshop on Omnidirectional Vision (held in conjunction with ECCV'02), 2002.

[5] Yassine Ruichek et al., "3D Reconstruction of Urban Environments Based on Fisheye Stereovision," Eighth International Conference on Signal Image Technology and Internet Based Systems, 2012.

[6] S. Thibault et al., "Fisheye lens calibration using virtual grid," Applied Optics, 2013.

[7] Rahul Sukthankar et al., "D-Nets: Beyond patch-based image descriptors," IEEE Conference on Computer Vision and Pattern Recognition, 2012.

[8] David W. Murray et al., "Towards simultaneous recognition, localization and mapping for hand-held and wearable cameras," IEEE International Conference on Robotics and Automation, 2007.

[9] Luis Puig et al., "Scale space for central catadioptric systems: Towards a generic camera feature extractor," International Conference on Computer Vision, 2011.

[10] João Pedro Barreto et al., "sRD-SIFT: Keypoint Detection and Matching in Images With Radial Distortion," IEEE Transactions on Robotics, 2012.

[11] Juho Kannala et al., "A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006.

[12] T. Bulow, "Spherical diffusion for 3D surface smoothing," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002.

[13] Tomás Svoboda et al., "Epipolar Geometry for Central Catadioptric Cameras," International Journal of Computer Vision, 2002.

[14] Peter I. Corke et al., "Wide-angle Visual Feature Matching for Outdoor Localization," International Journal of Robotics Research, 2010.

[15] Andrew W. Fitzgibbon et al., "Simultaneous linear estimation of multiple view geometry and lens distortion," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), 2001.

[16] Kenichi Kanatani et al., "Calibration of Ultrawide Fisheye Lens Cameras by Eigenvalue Minimization," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013.