Self-organization of head-centered visual responses under ecological training conditions

We have studied the development of head-centered visual responses in an unsupervised, self-organizing neural network model trained under ecological conditions. Four independent spatio-temporal characteristics of the training stimuli were explored to investigate the feasibility of self-organization under more ecological conditions. First, the number of head-centered visual training locations was varied over a broad range; model performance improved as the number of training locations approached continuous sampling of head-centered space. Second, the model depended on periods during which visual targets remained stationary in head-centered space while it performed saccades around the scene; the severity of this constraint was probed by introducing increasing levels of random eye movement and stimulus dynamics, and performance remained robust over a range of randomization. Third, the model was trained on visual scenes in which multiple simultaneous targets were always visible; self-organization succeeded despite the model never being exposed to a visual target in isolation. Fourth, the durations of fixations during training were made stochastic; with suitable changes to the learning rule, the model again self-organized successfully. These findings suggest that the fundamental learning mechanism on which the model rests is robust to the many forms of stimulus variability found under ecological training conditions.
