Monocular robot navigation using invariant natural features

In this paper, we present an approach to monocular robot navigation based on natural landmarks. Scale-invariant feature transform (SIFT) features are used to describe the landmarks because they are invariant to image scale, rotation, and translation. During the learning phase, the algorithm selects the most visually salient natural landmarks in the work environment at given positions; these landmarks are described by SIFT features and saved to a database. During the navigation phase, given a scene of the environment, a SIFT-based landmark recognition algorithm searches the database for corresponding landmarks. When a corresponding landmark is found, it is tracked over time by the Kanade-Lucas-Tomasi (KLT) tracker to obtain the relative pose between the robot and the landmark. To refine this estimate, least squares matching (LSM) is applied to obtain sub-pixel matching results. Experimental results show that this approach obtains accurate relative position information about natural landmarks for robot navigation.
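
The core of the recognition step described above is matching SIFT descriptors of the current scene against the landmark database. As a minimal sketch only (the paper does not give implementation details), the following NumPy snippet illustrates descriptor matching with Lowe's nearest-neighbour distance ratio test; the function name, the toy 2-D descriptors, and the 0.8 ratio threshold are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def match_descriptors(query, database, ratio=0.8):
    """Match query descriptors to a landmark database using Lowe's
    nearest-neighbour distance ratio test (illustrative sketch)."""
    matches = []
    for i, q in enumerate(query):
        # Euclidean distance from this query descriptor to every
        # descriptor stored in the landmark database.
        dists = np.linalg.norm(database - q, axis=1)
        # Indices of the two nearest database descriptors.
        nn1, nn2 = np.argsort(dists)[:2]
        # Accept the match only if the best neighbour is clearly
        # closer than the second best (i.e. the match is distinctive).
        if dists[nn1] < ratio * dists[nn2]:
            matches.append((i, nn1))
    return matches

# Toy example with 2-D "descriptors" (real SIFT descriptors are 128-D):
db = np.array([[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
scene = np.array([[1.0, 0.1]])
print(match_descriptors(scene, db))  # [(0, 0)]
```

In a full pipeline, descriptors that pass this test would identify which stored landmark is visible, after which the matched keypoints would be handed to the KLT tracker and refined by LSM.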
