Using motor representations for topological mapping and navigation

We propose the use of a motor vocabulary, which expresses a robot's specific motor capabilities, for topological map building and navigation. First, the motor vocabulary is created automatically through an imitation behaviour: the robot learns about its own motor repertoire by following a tutor and associating its perceived motion with motor words. The learnt motor representation is then used to build the topological map: the robot is guided through the environment, automatically captures relevant (omnidirectional) images, and associates motor words with the links between places in the map. Finally, the map is used for navigation by invoking sequences of motor words that represent the actions required to reach a desired goal. In addition, a reflex-type behaviour based on optical flow extracted from the omnidirectional images avoids lateral collisions during navigation. The relation between motor vocabularies and imitation is underscored by recent neurophysiological findings on visuomotor (mirror) neurons, which may constitute an internal motor representation underlying an animal's capacity for imitation. This approach provides a natural coupling between the robot's motion capabilities, its environment representations (maps), and its navigation processes. Encouraging experimental results are presented and discussed.
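The map-building and navigation steps admit a compact reading: places become graph nodes identified by stored omnidirectional images, each link carries the motor word learnt during the guided tour, and navigating is a path query that returns the sequence of motor words to execute. The sketch below illustrates that idea only; it is not the paper's implementation, and the class and motor-word names (TopologicalMap, go_forward, turn_left) are assumptions.

```python
# Minimal sketch (not the authors' implementation): a topological map whose
# directed links carry motor words, and navigation as a path query that
# returns the motor-word sequence to execute.
from collections import deque

class TopologicalMap:
    """Graph of places; each directed link stores the motor word that
    carries the robot from one place to the next."""

    def __init__(self):
        self.links = {}  # place -> list of (neighbour, motor_word)

    def add_place(self, place):
        self.links.setdefault(place, [])

    def add_link(self, src, dst, motor_word):
        """Associate a motor word (e.g. 'go_forward') with the link src -> dst."""
        self.add_place(src)
        self.add_place(dst)
        self.links[src].append((dst, motor_word))

    def plan(self, start, goal):
        """Breadth-first search; returns the motor words leading from start
        to goal, or None if the goal is unreachable."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            place, words = frontier.popleft()
            if place == goal:
                return words
            for nxt, word in self.links.get(place, []):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, words + [word]))
        return None

# Usage: the map is built during the guided tour, then queried for navigation.
m = TopologicalMap()
m.add_link("corridor", "door", "go_forward")
m.add_link("door", "lab", "turn_left")
print(m.plan("corridor", "lab"))  # ['go_forward', 'turn_left']
```

Breadth-first search suffices here because the links are unweighted; attaching a cost to each motor word would swap it for Dijkstra's algorithm without changing the interface.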

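The lateral-collision reflex can be read as flow balancing: if the optical-flow magnitude on one side of the image exceeds the other, the nearer wall is on that side and the robot steers away from it. Below is a minimal sketch of that idea using dense Farnebäck flow from OpenCV; the paper does not specify this estimator, and the panoramic unwarping of the omnidirectional image, the gain, and the left/right split are all assumptions.

```python
# Illustrative sketch of a flow-balancing reflex; estimator choice and
# parameters are assumptions, not the paper's method.
import numpy as np
import cv2

def lateral_reflex(prev_gray, curr_gray, gain=1.0):
    """Return a steering correction in [-gain, gain]: positive steers left,
    negative steers right. Assumes the omnidirectional image has been
    unwarped to a panorama whose left half faces the robot's left side."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)        # per-pixel flow magnitude
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    # Larger flow on one side means that wall is closer: steer away from it.
    return gain * (right - left) / (left + right + 1e-9)
```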