Autonomous navigation of a mobile robot using inertial and visual cues

This paper describes the development and implementation of a reactive visual module used on an autonomous mobile robot to automatically correct its trajectory. The authors employ a multisensory mechanism based on inertial and visual cues. They report only on the implementation and experimentation of this module; the main theoretical aspects have been developed elsewhere.
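To illustrate the general idea of correcting a trajectory from combined inertial and visual cues, the sketch below fuses an inertial heading estimate with a visual one via a simple complementary filter and derives a proportional steering correction. This is a minimal illustration under assumed names and gains (`fuse_heading`, `steering_correction`, `alpha`), not the mechanism actually implemented in the paper.

```python
# Hypothetical sketch: fuse a drift-prone inertial heading with a
# noisier but drift-free visual heading, then steer toward the goal.
# All function names and gain values are illustrative assumptions.

def fuse_heading(inertial_heading, visual_heading, alpha=0.9):
    """Complementary filter: weight the smooth inertial estimate
    heavily in the short term, letting the visual cue cancel drift."""
    return alpha * inertial_heading + (1.0 - alpha) * visual_heading

def steering_correction(heading, desired_heading, gain=0.5):
    """Proportional steering command (radians) toward the desired heading."""
    return gain * (desired_heading - heading)

# Example: gyro drift has pulled the estimate to 0.10 rad while vision
# reports 0.02 rad; the fused estimate sits between, and the controller
# commands a turn back toward the desired heading of 0.0 rad.
fused = fuse_heading(0.10, 0.02)        # 0.092 rad
turn = steering_correction(fused, 0.0)  # -0.046 rad
```

A reactive module of this kind runs the fuse-and-correct loop at sensor rate, which is why the paper treats it separately from the higher-level theoretical machinery developed elsewhere.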
