A more biologically plausible learning rule than backpropagation applied to a network model of cortical area 7a.

Area 7a of the posterior parietal cortex of the primate brain is concerned with representing head-centered space by combining information about the retinal location of a visual stimulus and the position of the eyes in the orbits. An artificial neural network was previously trained to perform this coordinate transformation task using the backpropagation learning procedure, and units in its middle layer (the hidden units) developed properties very similar to those of area 7a neurons presumed to code for spatial location (Andersen and Zipser, 1988; Zipser and Andersen, 1988). We developed two neural networks with an architecture similar to Zipser and Andersen's model and trained them to perform the same task using a more biologically plausible learning procedure than backpropagation. This procedure is a modification of the Associative Reward-Penalty (AR-P) algorithm (Barto and Anandan, 1985), which adjusts connection strengths using a global reinforcement signal together with information available locally at each synapse. Our networks learn to perform the task to any desired degree of accuracy, almost as quickly as with backpropagation, and the hidden units develop response properties very similar to those of area 7a neurons. In particular, the probability of firing of the hidden units in our networks varies with eye position in a roughly planar fashion, and their visual receptive fields are large and have complex surfaces. The synaptic strengths computed by the AR-P algorithm are equivalent to and interchangeable with those computed by backpropagation. Our networks also perform the correct transformation on pairs of eye and retinal positions never encountered during training. All of these findings are unaffected by the interposition of an extra layer of units between the hidden and output layers.
These results show that the response properties of the hidden units of a layered network trained to perform coordinate transformations, and their similarity to those of area 7a neurons, are not specific to backpropagation training. That the same properties can be obtained with a more biologically plausible learning rule supports this network's computational algorithm as a plausible model of how area 7a may perform coordinate transformations.
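The AR-P rule summarized above can be illustrated for a single stochastic binary unit. The sketch below follows Barto and Anandan's general formulation rather than the paper's specific networks; the learning-rate values `rho` and `lam` and the toy task are arbitrary choices for illustration. Note how the weight update uses only the global scalar reinforcement `r` and quantities available locally at the synapse (the input `x`, the output `y`, and the firing probability `p`):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def arp_step(w, x, target, rho=0.5, lam=0.05):
    """One Associative Reward-Penalty (AR-P) step for a stochastic binary unit.

    The unit fires (y = 1) with probability p = sigmoid(w . x).  The update
    uses only a global scalar reinforcement r and locally available
    quantities (x, y, p):

        reward  (r = 1): dw = rho * (y - p) * x
        penalty (r = 0): dw = rho * lam * ((1 - y) - p) * x
    """
    p = sigmoid(w @ x)
    y = 1.0 if rng.random() < p else 0.0      # stochastic binary output
    r = 1.0 if y == target else 0.0           # global reinforcement signal
    if r == 1.0:
        dw = rho * (y - p) * x                # push p toward the rewarded output
    else:
        dw = rho * lam * ((1.0 - y) - p) * x  # push p toward the opposite output
    return w + dw, y, r

# Toy usage: the unit learns to fire reliably for a fixed input pattern.
w = np.zeros(3)
x = np.array([1.0, 0.5, -0.25])
for _ in range(2000):
    w, y, r = arp_step(w, x, target=1.0)
print(sigmoid(w @ x))   # firing probability is now close to 1
```

The asymmetry between the reward and penalty branches (penalty updates are scaled down by `lam` and push toward the opposite action) is what distinguishes AR-P from a pure reward-only rule and gives it its convergence properties.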

[1] P. Werbos. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences, 1974.

[2] T. J. Sejnowski, et al. Skeleton filters in the brain, 1981.

[3] J. Movshon, et al. The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Research, 1983.

[4] M. Ito. The Cerebellum and Neural Control, 1984.

[5] A. G. Barto and P. Anandan. Pattern-recognizing stochastic learning automata. IEEE Transactions on Systems, Man, and Cybernetics, 1985.

[6] R. A. Andersen, G. K. Essick, and R. M. Siegel. Encoding of spatial location by posterior parietal neurons. Science, 1985.

[7] A. G. Barto. Learning by statistical cooperation of self-interested neuron-like computing elements. Human Neurobiology, 1985.

[8] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 1986.

[9] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation, 1986.

[10] S. Kelso, et al. Hebbian synapses in hippocampus. Proceedings of the National Academy of Sciences of the United States of America, 1986.

[11] R. Lippmann. An introduction to computing with neural nets. IEEE ASSP Magazine, 1987.

[12] V. Gullapalli. A stochastic algorithm for learning real-valued functions via reinforcement, 1988.

[13] T. J. Sejnowski and C. R. Rosenberg. NETtalk: a parallel network that learns to read aloud, 1988.

[14] R. A. Andersen and D. Zipser. The role of the posterior parietal cortex in coordinate transformations for visual-motor integration. Canadian Journal of Physiology and Pharmacology, 1988.

[15] M. DeLong, et al. Responses of nucleus basalis of Meynert neurons in behaving monkeys, 1988.

[16] J. L. McClelland. Explorations in Parallel Distributed Processing, 1988.

[17] G. W. Cottrell, et al. Image compression by back-propagation: an example of extensional programming, 1988.

[18] D. Zipser and R. A. Andersen. A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 1988.

[19] D. Tolhurst. The amount of information transmitted about contrast by neurones in the cat's visual cortex. Visual Neuroscience, 1989.

[20] P. K. Stanton and T. J. Sejnowski. Associative long-term depression in the hippocampus induced by Hebbian covariance. Nature, 1989.

[21] T. J. Sejnowski, et al. Induction of synaptic plasticity by Hebbian covariance in the hippocampus, 1989.

[22] R. Andersen. Visual and eye movement functions of the posterior parietal cortex. Annual Review of Neuroscience, 1989.

[23] P. M. Shea, et al. Detection of explosives in checked airline baggage using an artificial neural system. International Joint Conference on Neural Networks, 1989.

[24] G. E. Hinton. Connectionist learning procedures. Artificial Intelligence, 1989.

[25] R. Linsker. How to generate ordered maps by maximizing the mutual information between input and output signals. Neural Computation, 1989.

[26] R. Hecht-Nielsen. Theory of the backpropagation neural network. International Joint Conference on Neural Networks, 1989.

[27] R. A. Andersen, et al. Microstimulation of a neural-network model for visually guided saccades. Journal of Cognitive Neuroscience, 1989.

[28] T. H. Brown, E. W. Kairiss, and C. L. Keenan. Hebbian synapses: biophysical mechanisms and algorithms. Annual Review of Neuroscience, 1990.

[29] P. Mazzoni, R. A. Andersen, and M. I. Jordan. A more biologically plausible learning rule for neural networks. Proceedings of the National Academy of Sciences of the United States of America, 1991.

[30] D. Zipser. The neurobiological significance of the new learning models, 1993.