Selecting stable image features for robot localization using stereo

To navigate reliably, a mobile robot must be able to determine its current location. Starting from an unknown initial position, a robot must refer to its environment to determine its location in an external coordinate system. Even with a known initial position, odometry drift causes the estimated position to deviate from the true position, and this error must be corrected. We show how to find landmarks without requiring prior models. We use dense stereo data from our mobile robot's trinocular system to discover image regions that remain stable over widely differing viewpoints. We detect image brightness "corners" and retain only those that do not straddle depth discontinuities in the stereo depth data. Selecting corners only in regions of nearly planar stereo data yields landmarks that can be reliably observed in images taken from different viewpoints.
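As a minimal sketch of the selection step described above, the following Python code (assuming OpenCV for corner detection and NumPy for a local plane fit) detects brightness corners and keeps only those whose surrounding dense depth data is nearly planar. The function name, window size, thresholds, and the least-squares plane fit are illustrative assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np

def select_stable_corners(gray, depth, win=7, max_plane_rms=0.02,
                          max_corners=500, quality=0.01, min_dist=10):
    """Detect brightness corners and keep only those whose surrounding
    depth data is nearly planar, i.e. not straddling a depth discontinuity."""
    # Corner detection (Shi-Tomasi / Harris-style "good features to track").
    corners = cv2.goodFeaturesToTrack(gray, max_corners, quality, min_dist)
    if corners is None:
        return []

    half = win // 2
    stable = []
    for (x, y) in corners.reshape(-1, 2):
        x, y = int(round(x)), int(round(y))
        # Skip corners too close to the image border for a full window.
        if (x < half or y < half or
                x >= gray.shape[1] - half or y >= gray.shape[0] - half):
            continue
        patch = depth[y - half:y + half + 1, x - half:x + half + 1]
        valid = np.isfinite(patch) & (patch > 0)
        if valid.sum() < 0.8 * patch.size:
            continue  # too many missing stereo measurements in this window

        # Fit a plane z = a*u + b*v + c to the local depth patch by least squares.
        vs, us = np.nonzero(valid)
        zs = patch[valid]
        A = np.column_stack([us, vs, np.ones_like(us)])
        coeffs, *_ = np.linalg.lstsq(A, zs, rcond=None)
        rms = np.sqrt(np.mean((A @ coeffs - zs) ** 2))

        # Keep the corner only if the residual is small relative to its depth,
        # i.e. the local surface is nearly planar.
        if rms / np.median(zs) < max_plane_rms:
            stable.append((x, y))
    return stable
```

A simpler variant would threshold the spread of depths in the window; the plane fit is sketched here because it tolerates smooth slanted surfaces while still rejecting windows that straddle a depth discontinuity.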
