Visual map-less navigation based on homographies

We introduce a method for autonomous robot navigation based on homographies computed between the current image and images taken during a previous teaching phase with a monocular vision system. The features used to estimate the homography are vertical lines, automatically extracted and matched. From the homography, the motion correction between the reference path and the current robot location is computed. The proposed method, which requires a single calibration parameter, has proven especially useful for correcting heading and lateral displacement, which are critical in systems based on odometry. We have tested the proposal in simulation and with real images, and the visual system has been integrated into an autonomous wheelchair for disabled users, operating robustly in real time.
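As a rough illustration of the idea, the sketch below estimates a homography between a reference (teaching-phase) image and the current image from matched feature correspondences and derives an approximate heading correction. It is not the authors' formulation: it assumes generic point matches (e.g., endpoints of matched vertical lines), OpenCV's `cv2.findHomography` for robust estimation, and a single calibration parameter `f` (focal length in pixels), with the hypothetical helper `heading_correction` defined here only for this example.

```python
import numpy as np
import cv2


def heading_correction(ref_pts, cur_pts, f, cx, cy):
    """Approximate heading error (radians) from matched points.

    ref_pts, cur_pts : (N, 2) arrays of corresponding pixel coordinates
                       in the reference and current images.
    f                : focal length in pixels (the single calibration parameter).
    cx, cy           : principal point, assumed at the image centre.
    """
    # Robustly estimate the homography mapping reference points to current points.
    H, _ = cv2.findHomography(ref_pts.astype(np.float32),
                              cur_pts.astype(np.float32),
                              cv2.RANSAC, 3.0)

    # Map the reference principal point through H. For a robot rotating about
    # its vertical axis, the horizontal displacement of this point is roughly
    # f * tan(heading error), so the correction is approximately:
    p = H @ np.array([cx, cy, 1.0])
    dx = p[0] / p[2] - cx
    return np.arctan2(dx, f)
```

In a teach-and-replay loop, this correction would be fed back to the robot's heading controller at each step; lateral displacement can be recovered analogously from the homography under the same planar-motion assumption.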
