A generic neural network for multi-modal sensorimotor learning

A generic neural network module has been developed that learns to combine multi-modal sensory information in order to produce adequate motor commands. In a first step, the module learns to combine multi-modal sensory information; based on this combined representation, it subsequently learns to control a kinematic arm. The module can learn to combine two sensory inputs regardless of their modality. We report the architecture and learning strategy of the module and characterize its performance through simulations of two reaching situations with a linear arm with multiple degrees of freedom: (1) mapping of tactile and arm-related proprioceptive information, and (2) mapping of gaze and arm-related proprioceptive information.

© 2004 Elsevier B.V. All rights reserved.
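The two-step learning strategy described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual architecture: all class and parameter names are assumptions, and simple gradient/delta-rule updates stand in for whatever learning rules the module actually uses. Stage 1 learns a merged representation of two sensory inputs of arbitrary modality; stage 2 learns a motor readout on top of the frozen merged representation.

```python
import numpy as np

rng = np.random.default_rng(0)


class TwoStageModule:
    """Hypothetical sketch of a two-stage sensorimotor module.

    Stage 1 merges two sensory inputs (any modality) into a shared
    representation; stage 2 maps that representation to motor commands.
    """

    def __init__(self, dim_a, dim_b, dim_merge, dim_motor, lr=0.1):
        # Weights from each sensory input to the merged layer,
        # and from the merged layer to the motor output.
        self.W_a = rng.normal(scale=0.1, size=(dim_merge, dim_a))
        self.W_b = rng.normal(scale=0.1, size=(dim_merge, dim_b))
        self.W_m = rng.normal(scale=0.1, size=(dim_motor, dim_merge))
        self.lr = lr

    def merge(self, a, b):
        # Combined representation of the two sensory inputs.
        return np.tanh(self.W_a @ a + self.W_b @ b)

    def train_merge(self, a, b, target):
        # Stage 1: pull the merged representation toward a target
        # (gradient step on squared error through the tanh).
        h = self.merge(a, b)
        err = target - h
        grad = err * (1.0 - h**2)
        self.W_a += self.lr * np.outer(grad, a)
        self.W_b += self.lr * np.outer(grad, b)
        return float(np.mean(err**2))

    def motor(self, a, b):
        # Motor command from the merged sensory representation.
        return self.W_m @ self.merge(a, b)

    def train_motor(self, a, b, target_cmd):
        # Stage 2: delta rule on the motor readout only;
        # the merge weights learned in stage 1 stay frozen.
        h = self.merge(a, b)
        err = target_cmd - self.W_m @ h
        self.W_m += self.lr * np.outer(err, h)
        return float(np.mean(err**2))


# Toy usage: two 3-d sensory inputs, a 4-d merged layer, 2 motor commands.
module = TwoStageModule(dim_a=3, dim_b=3, dim_merge=4, dim_motor=2)
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
merge_target = np.array([0.5, -0.5, 0.2, 0.1])
motor_target = np.array([0.3, -0.2])

merge_errs = [module.train_merge(a, b, merge_target) for _ in range(200)]
motor_errs = [module.train_motor(a, b, motor_target) for _ in range(200)]
```

In this sketch the two stages are trained sequentially, mirroring the paper's strategy of first learning the sensory combination and only then learning the motor mapping on top of it; the specific network sizes and update rules here are placeholders.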