Self Body Mapping in Mobile Robots Using Vision and Forward Models

The work presented in this paper aims to provide an agent with basic capabilities leading towards navigation through self body-mapping using a vision system. In particular, we study forward models, which encode the sensory consequences of an agent's self-produced actions. The research is developed within the framework of cognitive robotics and embodied cognition. The agent is a robot that interacts with its environment to learn the free space around it by re-enacting sensory-motor cycles and predicting collisions from visual data. From the disparity map, the robot associates intensity regions with motor commands in order to predict, in self-motion coordinates, the distances to objects reported by a tactile sensor. To form this multimodal association we use a forward model, implemented as a system of neural networks and trained with data from random trajectories executed by a Pioneer 3-DX. The resulting forward model allows the agent to navigate while avoiding undesired situations by making long-term predictions of the sensory consequences of its actions. The experiments validate the hypothesis that this model provides a basic self body-mapped navigation capability.
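The core mechanism described above — a forward model whose one-step predictions are chained into long-term predictions, so that action sequences leading to collisions can be rejected before execution — can be sketched as follows. This is a minimal illustration only: the hand-set kinematic rule, function names, and threshold are assumptions standing in for the paper's trained neural networks, which map disparity-image regions and motor commands to tactile distances.

```python
def forward_model(distance, speed, dt=1.0):
    """One-step prediction: predicted obstacle distance after driving at
    `speed` for one time step `dt`. A hand-set kinematic rule stands in
    here for the trained networks of the paper."""
    return distance - speed * dt

def rollout(distance, motor_sequence, collision_threshold=0.2):
    """Long-term prediction: chain one-step predictions over a planned
    motor sequence; flag the sequence if any predicted distance falls
    below the collision threshold (an undesired situation)."""
    predictions = []
    for speed in motor_sequence:
        distance = forward_model(distance, speed)
        predictions.append(distance)
        if distance < collision_threshold:
            return predictions, True  # reject this sequence before executing it
    return predictions, False

# Starting 1.0 m from an obstacle, four planned steps at 0.3 m/s predict a
# collision on the third step, so the agent can discard the plan in advance.
predictions, collision = rollout(1.0, [0.3, 0.3, 0.3, 0.3])
```

The point of the re-enaction is that the rollout runs entirely on predicted sensory states; no real tactile contact is needed to discover that the plan ends in a collision.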
