A model-to-image straight line matching method for vision-based indoor mobile robot self-location

An efficient and simple method for matching image features to a model is presented. It is designed for indoor mobile robot self-location. It is a two-stage method based on the interpretation-tree search approach using straight-line correspondences. In the first stage, a set of matching hypotheses is generated. Exploiting the specificity of the mobile robotics context, the global interpretation tree is divided into two sub-trees, and two geometric constraints are defined directly on the 2D-3D correspondences to improve pruning and search efficiency. In the second stage, the pose is computed for each matching hypothesis and the best hypothesis is selected according to a defined error function. Test results illustrate the performance of the approach.
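To make the two-stage structure concrete, the sketch below is a minimal illustration under assumptions of its own, not the paper's formulation: stage 1 runs a depth-first interpretation-tree search that assigns each image line a model line (or leaves it unmatched) and prunes branches with a placeholder pairwise angle-consistency test; stage 2 ranks the surviving hypotheses with a placeholder error function. The consistency test, the error function, the tolerance values, and all function names are hypothetical stand-ins for the constraints and pose-based selection defined in the paper.

```python
"""Illustrative sketch of a two-stage line-matching scheme:
stage 1 = interpretation-tree search with constraint pruning,
stage 2 = hypothesis scoring. Placeholder geometry, not the paper's method."""

import math


def line_angle_2d(line):
    """Orientation of a 2D segment ((x1, y1), (x2, y2)) in radians."""
    (x1, y1), (x2, y2) = line
    return math.atan2(y2 - y1, x2 - x1)


def pairwise_consistent(img_i, mdl_i, img_j, mdl_j, tol=0.3):
    """Placeholder constraint: the angle between two image lines should roughly
    match the angle between the corresponding model lines (given here already
    projected to 2D for simplicity). The tolerance is arbitrary."""
    d_img = abs(line_angle_2d(img_i) - line_angle_2d(img_j))
    d_mdl = abs(line_angle_2d(mdl_i) - line_angle_2d(mdl_j))
    return abs(d_img - d_mdl) < tol


def interpretation_tree(image_lines, model_lines):
    """Stage 1: depth-first search over correspondences with pruning.
    Each image line gets a model line index or None (wildcard branch)."""
    hypotheses = []

    def extend(depth, assignment):
        if depth == len(image_lines):
            if any(m is not None for m in assignment):
                hypotheses.append(tuple(assignment))
            return
        # Wildcard branch: leave this image line unmatched.
        extend(depth + 1, assignment + [None])
        for m in range(len(model_lines)):
            if m in assignment:
                continue  # enforce one-to-one matching
            # Prune: the new pairing must be consistent with all earlier ones.
            ok = all(
                prev is None or pairwise_consistent(
                    image_lines[depth], model_lines[m],
                    image_lines[d], model_lines[prev])
                for d, prev in enumerate(assignment))
            if ok:
                extend(depth + 1, assignment + [m])

    extend(0, [])
    return hypotheses


def hypothesis_error(assignment, image_lines, model_lines):
    """Stage 2 placeholder: mean angular discrepancy of the matched lines
    stands in for the pose-based error function of the paper."""
    errs = [abs(line_angle_2d(image_lines[d]) - line_angle_2d(model_lines[m]))
            for d, m in enumerate(assignment) if m is not None]
    return sum(errs) / len(errs)


if __name__ == "__main__":
    image_lines = [((0, 0), (1, 0)), ((0, 0), (0, 1))]
    model_lines = [((2, 2), (4, 2)), ((2, 2), (2, 5)), ((0, 0), (3, 3))]
    candidates = interpretation_tree(image_lines, model_lines)
    best = min(candidates,
               key=lambda a: hypothesis_error(a, image_lines, model_lines))
    print("best correspondence (image line index -> model line index):", best)
```

In this toy run the pruning step already discards pairings whose inter-line angles disagree, and the scoring step then picks the consistent hypothesis with the smallest residual, which mirrors the hypothesize-then-verify structure described in the abstract.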
