Learning a synaptic learning rule

Summary form only given, as follows. The authors discuss an original approach to neural modeling based on searching, with learning methods, for a synaptic learning rule that is biologically plausible and yields networks able to learn difficult tasks. The proposed method for automatically finding the learning rule treats the synaptic modification rule as a parametric function that takes only locally available inputs and is shared across many neurons. The parameters defining this function can be estimated with known learning methods; particular attention is given to gradient descent and genetic algorithms. In both cases, estimating the rule amounts to a joint global optimization of the synaptic modification function and of the networks that use it to learn to perform a set of tasks. Both the network architecture and the form of the learning function can be designed within constraints derived from biological knowledge.
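
To make the idea concrete, the following is a minimal sketch, not the authors' implementation: the synaptic update is written as a parametric function of purely local quantities, shared by every synapse, and its parameters are chosen by an outer search (here a toy genetic-algorithm-style loop) according to how well networks using the rule learn a few small tasks. The parametric form, the tasks, and all names are illustrative assumptions.

# Sketch of a parametric local learning rule optimized by an outer
# evolutionary search (illustrative only; not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

def local_rule(pre, post, w, theta):
    """Parametric synaptic modification: a linear combination of local
    terms (pre- and postsynaptic activity, their product, current weight)."""
    a, b, c, d, lr = theta
    return lr * (a * pre * post + b * pre + c * post + d * w)

def inner_learning(theta, task_inputs, task_targets, steps=200):
    """Train a single unit on one task using only the local rule;
    return its final mean squared error (lower is better)."""
    n_in = task_inputs.shape[1]
    w = rng.normal(scale=0.1, size=n_in)
    for _ in range(steps):
        i = rng.integers(len(task_inputs))
        x, t = task_inputs[i], task_targets[i]
        y = np.tanh(w @ x)
        # The target enters through the postsynaptic term so the rule can
        # discover error-correcting behaviour (an assumption of this sketch).
        w += local_rule(x, t - y, w, theta)
    preds = np.tanh(task_inputs @ w)
    return np.mean((preds - task_targets) ** 2)

def fitness(theta, tasks):
    """Average final error over several tasks learned with the candidate rule."""
    return np.mean([inner_learning(theta, X, T) for X, T in tasks])

def make_task(n_in=5, n_examples=40):
    """A random linearly separable task the networks must learn."""
    w_true = rng.normal(size=n_in)
    X = rng.normal(size=(n_examples, n_in))
    T = np.sign(X @ w_true)
    return X, T

tasks = [make_task() for _ in range(4)]

# Outer loop: joint optimization of the rule parameters over all tasks,
# here via a simple genetic-algorithm-style select-and-mutate scheme.
population = [rng.normal(scale=0.5, size=5) for _ in range(20)]
for generation in range(15):
    scored = sorted(population, key=lambda th: fitness(th, tasks))
    parents = scored[:5]                       # keep the best rules
    population = parents + [
        p + rng.normal(scale=0.1, size=5)      # mutate the survivors
        for p in parents for _ in range(3)
    ]

best = min(population, key=lambda th: fitness(th, tasks))
print("best rule parameters:", best, "error:", fitness(best, tasks))

The same outer loop could instead use gradient descent on the rule parameters, as the abstract notes, provided the inner learning process is differentiable with respect to them.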
