Hebbian learning of visually directed reaching by a robot arm

We describe a robotic system, consisting of an arm and an active vision system, that learns to align its sensory and motor maps so that it can reach the tip of its arm to the point at which it is looking. The system uses an unsupervised Hebbian learning algorithm and learns the alignment by watching its arm wave in front of its eyes. After about 25 minutes of watching, the maps are sufficiently well aligned for the system to execute the desired reaching behavior.

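The core idea of Hebbian map alignment can be sketched as a small toy simulation. The following code is only an illustrative assumption, not the paper's implementation: the unit counts, learning rate, the `population_code` and `hebbian_step` helpers, and the simulated posture-to-gaze relation are all hypothetical choices made for the sketch.

```python
import numpy as np

# Toy sketch of unsupervised Hebbian alignment between a visual (gaze) map
# and a motor (arm posture) map. While the arm waves in front of the cameras,
# the two maps are co-active, and a Hebbian outer-product rule strengthens
# connections between co-active units. All sizes and constants are assumptions.

rng = np.random.default_rng(0)

N_VISUAL = 100   # units in the visual map (assumed size)
N_MOTOR = 100    # units in the motor map (assumed size)
ETA = 0.05       # Hebbian learning rate (assumed value)

W = np.zeros((N_MOTOR, N_VISUAL))  # motor <- visual association weights


def population_code(x, n_units, width=0.05):
    """Gaussian population code of a scalar x in [0, 1] over n_units."""
    centers = np.linspace(0.0, 1.0, n_units)
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))


def hebbian_step(W, visual_activity, motor_activity, eta=ETA):
    """One Hebbian update: strengthen weights between co-active units."""
    W += eta * np.outer(motor_activity, visual_activity)
    # Crude normalization keeps the weights bounded.
    W /= np.maximum(W.max(), 1.0)
    return W


# "Watching the arm wave": random postures; the vision system sees the hand
# at a corresponding gaze location (simulated here as posture plus noise).
for _ in range(5000):
    posture = rng.random()
    gaze = np.clip(posture + 0.02 * rng.standard_normal(), 0.0, 1.0)
    W = hebbian_step(W,
                     population_code(gaze, N_VISUAL),
                     population_code(posture, N_MOTOR))

# After learning, driving the motor map from a purely visual input recovers
# the arm posture needed to reach the fixated point.
target_gaze = 0.7
motor_drive = W @ population_code(target_gaze, N_VISUAL)
decoded_posture = np.linspace(0.0, 1.0, N_MOTOR)[np.argmax(motor_drive)]
print(f"gaze target {target_gaze:.2f} -> decoded posture {decoded_posture:.2f}")
```

In this sketch the learned weight matrix plays the role of the sensorimotor alignment: after enough co-occurring activity, a visually specified target alone activates the motor units corresponding to the matching arm posture.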