Integrating visual odometry and dead-reckoning for robot localization and obstacle detection

This research introduces a methodology for inferring environment structure and robot localization using a monocular machine-vision system. The local field of view is constrained to the vicinity of the mobile robot in order to support robust navigation. The proposed strategy applies optical-flow techniques and a planar floor model to obtain qualitative 3D information and robot localization from the time integration of acquired frames; in this way, several spatial resolutions and meaningful corner features are considered. Significant image points are matched across frames using knowledge of the camera pose together with odometer data. Localization is achieved by combining the on-board and visual odometry systems to reduce dead-reckoning error: the errors of the two systems are compared in a parallel process that selects the more accurate pose estimate. Moreover, mismatches between the two odometry estimates, which occur when the planar floor model is violated, are used to infer qualitative 3D structure and hence to detect obstacles. Experimental results obtained with the available laboratory mobile platform confirm the effectiveness of the proposed approach. Further remarkable features of the strategy are its simplicity and low computational cost.
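The core of the parallel comparison can be illustrated with a short sketch: each odometry subsystem maintains its own pose estimate and an accumulated error measure; the estimate with the smaller error is kept, and a large disagreement between the two estimates is treated as evidence that the planar floor assumption has been violated, i.e. a likely obstacle. This is only a minimal illustration under assumed interfaces, not the authors' implementation; the Pose type, the scalar error measures, and the MISMATCH_THRESHOLD value are all hypothetical.

import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float      # metres
    y: float      # metres
    theta: float  # heading, radians

def pose_distance(a: Pose, b: Pose) -> float:
    # Euclidean distance between the two planar position estimates.
    return math.hypot(a.x - b.x, a.y - b.y)

# Hypothetical mismatch threshold (metres): above it, the wheel and
# visual estimates are considered inconsistent, i.e. the planar floor
# model no longer holds and an obstacle is suspected.
MISMATCH_THRESHOLD = 0.15

def fuse_odometry(wheel: Pose, wheel_err: float,
                  visual: Pose, visual_err: float):
    # Parallel comparison: keep the estimate whose accumulated error
    # measure is smaller, and flag a possible obstacle whenever the
    # two estimates disagree by more than the threshold.
    mismatch = pose_distance(wheel, visual)
    obstacle_suspected = mismatch > MISMATCH_THRESHOLD
    best = wheel if wheel_err <= visual_err else visual
    return best, obstacle_suspected

# Example: visual odometry has drifted less, and the estimates agree.
pose, obstacle = fuse_odometry(
    wheel=Pose(1.00, 0.50, 0.10), wheel_err=0.08,
    visual=Pose(1.02, 0.48, 0.11), visual_err=0.05)
print(pose, obstacle)   # visual estimate selected, obstacle False

In the paper's scheme, such a mismatch is additionally used to infer qualitative 3D information, since it signals where the planar floor model breaks down.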
