Mobile Robot Localization through Identifying Spatial Relations from Detected Corners

In this paper, the Harris corner detection algorithm is applied to images captured by a time-of-flight (ToF) camera mounted on a mobile robot. The ToF camera is exploited as a gray-scale camera for localization purposes: each gray-scale image encodes distance measurements, from which good features to track are extracted. These features, which are points in space, form the basis of the spatial relations used in the localization algorithm. The approach to the localization problem rests on computing the spatial relations that exist among the detected corners; the current spatial relations are then matched against the relations obtained during previous navigation.
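The pipeline described above can be sketched in two steps: compute the Harris response on the gray-scale range image to pick corner features, then describe a set of corner points by pose-invariant spatial relations (here, sorted pairwise Euclidean distances) so that relations from the current view can be matched against those stored during previous navigation. This is a minimal numpy-only sketch, not the authors' implementation; the function names, the box-filter window, the Harris constant k, and the thresholds are illustrative assumptions.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner response for a gray-scale (here: ToF range) image."""
    Iy, Ix = np.gradient(img.astype(float))          # image gradients (rows, cols)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # simple box filter: sliding-window sum of a structure-tensor entry
        pad = win // 2
        p = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)     # smoothed structure tensor M
    det = Sxx * Syy - Sxy * Sxy                      # det(M)
    trace = Sxx + Syy                                # trace(M)
    return det - k * trace ** 2                      # R = det(M) - k * trace(M)^2

def detect_corners(img, thresh_rel=0.1):
    """Return (row, col) pixels whose response exceeds a relative threshold."""
    R = harris_response(img)
    return np.argwhere(R > thresh_rel * R.max())

def pairwise_relations(points):
    """Spatial relations invariant to robot pose: sorted pairwise distances
    between the 3-D points corresponding to detected corners."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    return sorted(np.linalg.norm(pts[i] - pts[j])
                  for i in range(n) for j in range(i + 1, n))
```

Because pairwise distances are invariant to rotation and translation, the sorted distance list computed from the current view can be compared directly (up to measurement tolerance) with the list stored for a previously visited place.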
