Visual homing: a purely appearance-based approach

This paper presents an algorithm that uses visual input to perform homing with an autonomous mobile robot. An image captured at the target pose (position and orientation) is compared with the currently viewed image to determine the parameters of the robot's next move toward the target (a rotation and a translation). Visual data are captured with an omnidirectional camera, and images are compared using the Manhattan distance to determine both the direction of translation and the rotation angle. Results, successes, and limitations are presented and discussed.
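
The abstract gives no code, but the rotation-recovery step it describes can be illustrated compactly. The sketch below is a hypothetical, minimal rendering of the idea (not the authors' implementation): for an unwrapped omnidirectional (panoramic) image, each horizontal column shift corresponds to a rotation about the camera's vertical axis, so the shift that minimises the Manhattan distance to the target image gives an estimate of the rotation angle. Images are assumed to be NumPy arrays; the function names `manhattan_distance` and `best_rotation` are illustrative only.

```python
import numpy as np

def manhattan_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute pixel differences (L1 / Manhattan distance).

    Cast to a signed type first so uint8 subtraction does not wrap around.
    """
    return float(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def best_rotation(current: np.ndarray, target: np.ndarray) -> float:
    """Estimate the rotation (in degrees) between two unwrapped
    panoramic images by exhaustive column-shift matching.

    A shift of k columns in an image of width W corresponds to a
    rotation of k * 360 / W degrees about the vertical axis.
    """
    width = current.shape[1]
    distances = [
        manhattan_distance(np.roll(current, shift, axis=1), target)
        for shift in range(width)
    ]
    best_shift = int(np.argmin(distances))
    return best_shift * 360.0 / width
```

The translation step is presumably handled analogously, with the Manhattan distance serving as a measure of how far the current view is from the target view; one plausible reading, consistent with appearance-based homing in general, is that the robot moves in whichever direction reduces this distance until it falls below a threshold at the target pose.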
