Developmental learning of integrating visual attention shifts and bimanual object grasping and manipulation tasks

To achieve visually guided object manipulation through learning by example, this neuro-robotics study considers the integration of two essential mechanisms, visual attention and arm/hand movement, and their adaptive coordination. We propose a new dynamic neural network model in which visual attention and motor behavior are associated in task-specific ways through learning, self-organizing the functional hierarchy required for the cognitive tasks. Top-down visual attention provides a goal-directed sequence of shifts along a visual scan path and can guide the generation of a motor plan for hand movements during action via reinforcement and inhibition learning. The proposed model automatically generates goal-directed actions appropriate to the current sensory state, including visual stimuli and body posture. Experiments show that developmental learning, progressing from basic actions to combined ones, achieves a degree of generalization that allows some novel behaviors to be generated without prior learning.
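As a rough illustration of the kind of architecture the abstract describes, the sketch below implements a multiple-timescale recurrent network in which fast context units, driven by visual and proprioceptive input and modulated by slowly changing task-level units, jointly produce an attention-shift output and a motor command. This is not the authors' implementation: the layer sizes, time constants, weight initialization, and variable names are illustrative assumptions, and learning from tutored examples is omitted.

```python
# Minimal sketch (assumed MTRNN-style architecture, not the paper's code) of a
# recurrent network coupling top-down visual attention with motor generation.
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes (assumed): visual features, proprioception, fast context units,
# slow context units, attention-shift output, motor output.
N_VIS, N_PROP, N_FAST, N_SLOW, N_ATT, N_MOT = 16, 8, 30, 10, 4, 8

# Time constants: fast units track current stimuli; slow units hold the
# task-level intention that sequences attention shifts and hand movements.
TAU_FAST, TAU_SLOW = 2.0, 20.0

# Random weights stand in for weights that would be learned from demonstrations.
W_in_fast   = rng.normal(0, 0.1, (N_FAST, N_VIS + N_PROP))
W_fast      = rng.normal(0, 0.1, (N_FAST, N_FAST))
W_slow2fast = rng.normal(0, 0.1, (N_FAST, N_SLOW))
W_slow      = rng.normal(0, 0.1, (N_SLOW, N_SLOW))
W_fast2slow = rng.normal(0, 0.1, (N_SLOW, N_FAST))
W_att       = rng.normal(0, 0.1, (N_ATT, N_FAST))
W_mot       = rng.normal(0, 0.1, (N_MOT, N_FAST))

def step(u_fast, u_slow, vis, prop):
    """One leaky-integrator update; returns new states plus attention and motor outputs."""
    x = np.concatenate([vis, prop])
    du_fast = (-u_fast + W_in_fast @ x + W_fast @ np.tanh(u_fast)
               + W_slow2fast @ np.tanh(u_slow)) / TAU_FAST
    du_slow = (-u_slow + W_slow @ np.tanh(u_slow)
               + W_fast2slow @ np.tanh(u_fast)) / TAU_SLOW
    u_fast, u_slow = u_fast + du_fast, u_slow + du_slow
    attention = np.exp(W_att @ np.tanh(u_fast))
    attention /= attention.sum()               # softmax over candidate attention targets
    motor = np.tanh(W_mot @ np.tanh(u_fast))   # normalized joint-angle command
    return u_fast, u_slow, attention, motor

# Usage: roll the closed loop forward; the attention output would select the next
# fixation target, whose visual features feed back into the network on the next step.
u_fast, u_slow = np.zeros(N_FAST), np.zeros(N_SLOW)
for t in range(50):
    vis, prop = rng.normal(size=N_VIS), rng.normal(size=N_PROP)  # placeholder sensor readings
    u_fast, u_slow, attention, motor = step(u_fast, u_slow, vis, prop)
```

The separation into fast and slow time constants is what allows a functional hierarchy to self-organize: slow units come to encode which subtask is being executed, while fast units generate the moment-to-moment attention and motor details, so composing learned primitives into novel combined behaviors amounts to new trajectories in the slow units' state space.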
