Learning visuo-tactile coordination in robotic systems

As in humans, robotic systems should be able to respond to unexpected tactile events by orienting their visual attention toward the location of the stimulus. This raises two basic problems: 1) a general method is needed for integrating attentive processes belonging to different sensory modalities according to the task at hand; 2) for the specific case of touch-driven gaze shifts, a sensorimotor transformation must be identified that links the stimulation of tactile receptors to the spatial position of the camera via the current posture of the system. In this paper we describe a general framework for integrating multimodal attentive mechanisms, and we show how visuo-tactile coordination can be learnt autonomously on the basis of sensory consistency and feedback. After presenting the method in general terms, we consider the case of a robotic system composed of a 2-DOF arm and a 2-DOF head. Experiments with this system show that it discovers its own functional model without any external intervention and adapts it continuously during normal operation. The approach gives good results while offering the advantages of autonomy and adaptability.
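To make the learning step concrete, the sketch below shows one hypothetical way such a tactile-to-gaze mapping could be learnt online: a linear map from the tactile contact location and the current arm posture to pan/tilt head angles, corrected after each saccade using the residual visual error. The class name, feature layout, and delta-rule update are illustrative assumptions, not the implementation described in the paper.

```python
import numpy as np

class TactileGazeMap:
    """Hypothetical sketch: an online-learned linear map from
    (tactile contact location, arm posture) to head joint angles (pan, tilt),
    adapted from visual feedback in the spirit of learning visuo-tactile
    coordination from sensory consistency."""

    def __init__(self, n_inputs, n_outputs=2, lr=0.05):
        self.W = np.zeros((n_outputs, n_inputs + 1))  # +1 column for a bias term
        self.lr = lr

    def _features(self, tactile_pos, arm_angles):
        # Concatenate the tactile coordinate(s), the arm joint angles, and a bias.
        return np.concatenate([np.atleast_1d(tactile_pos),
                               np.atleast_1d(arm_angles),
                               [1.0]])

    def predict(self, tactile_pos, arm_angles):
        # Head joint command expected to bring the stimulated spot into view.
        return self.W @ self._features(tactile_pos, arm_angles)

    def update(self, tactile_pos, arm_angles, gaze_error):
        # gaze_error: residual image-plane offset of the stimulated body part
        # after the saccade; its sign/scale depend on the camera geometry.
        x = self._features(tactile_pos, arm_angles)
        self.W -= self.lr * np.outer(gaze_error, x)  # delta-rule correction


# Toy usage for a 2-DOF arm plus a 1-D tactile coordinate (illustrative values).
gaze = TactileGazeMap(n_inputs=3)
head_cmd = gaze.predict(tactile_pos=0.4, arm_angles=[0.2, -0.6])
# ...move the head, observe where the stimulated spot lands in the image...
gaze.update(tactile_pos=0.4, arm_angles=[0.2, -0.6],
            gaze_error=np.array([0.05, -0.02]))
```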