Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis

Learning in the brain is usually associated with plastic changes that optimize the strength of connections between neurons. Although many details of the biophysical mechanisms of synaptic plasticity have been discovered, it remains unclear how adaptive modifications performed concurrently at an enormous number of synaptic sites are organized so as to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable tool for investigating this problem. However, error back-propagation (EBP), the conventional method for optimizing synaptic weights in artificial neural networks, is not biologically plausible. This study, based on computational experiments, demonstrates that such optimization can be performed quite efficiently using the same general method that bacteria employ to move toward an attractant or away from a repellent. Applied to neural network optimization, the method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. A neural network using this method (regulation of modification probability, RMP) can be viewed as swimming through the multidimensional space of its parameters in a flow of biochemical agents that carry information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.
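
The abstract describes the RMP rule only at a conceptual level. The sketch below (Python, not taken from the paper) illustrates the general run-and-tumble idea: every weight keeps being modified in a fixed direction, and the probability of abruptly reversing that direction is regulated by the temporal change of the objective function. The toy task, network size, step size eta, probability bounds p_min/p_max, logistic gain, and per-weight reversal rule are all illustrative assumptions, not details of the study's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) with one hidden layer of tanh units.
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

def init_params(n_in=1, n_hidden=8, n_out=1):
    return {
        "W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def loss(params):
    hidden = np.tanh(X @ params["W1"] + params["b1"])
    pred = hidden @ params["W2"] + params["b2"]
    return float(np.mean((pred - Y) ** 2))

def rmp_train(params, steps=20000, eta=0.01, p_min=0.02, p_max=0.5, gain=200.0):
    """Chemotaxis-style optimization sketch: no per-weight gradients of the
    loss are computed; only the scalar temporal change of the loss is
    broadcast to all weights."""
    # Each weight gets a fixed-magnitude modification direction (+1 or -1).
    dirs = {name: rng.choice([-1.0, 1.0], size=w.shape) for name, w in params.items()}
    prev = loss(params)
    for _ in range(steps):
        # "Run": keep modifying every weight in its current direction.
        for name in params:
            params[name] += eta * dirs[name]
        cur = loss(params)
        delta = cur - prev  # temporal gradient of the objective
        # "Tumble": the probability of abruptly reversing a weight's direction
        # grows when the objective worsens (assumed logistic dependence).
        p_change = p_min + (p_max - p_min) / (1.0 + np.exp(-gain * delta))
        for name in dirs:
            flip = rng.random(dirs[name].shape) < p_change
            dirs[name][flip] *= -1.0
        prev = cur
    return params

params = rmp_train(init_params())
print(f"final MSE: {loss(params):.4f}")
```

Note that, unlike EBP, a rule of this kind requires only a single scalar feedback signal (the recent change in the objective) shared by all weights, which is what makes a chemotaxis-like mechanism attractive as a biologically plausible candidate.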
