Optimal In-Place Learning and the Lobe Component Analysis

It is difficult to map many existing learning algorithms onto biological networks because they require a separate learning network, and the computational basis of biological cortical learning remains poorly understood. This paper rigorously introduces the concept of in-place learning: every networked neuron is responsible, in place, for learning its own signal-processing characteristics (e.g., the efficacies of its synapses) within its connected network environment, so no separate learning network is needed. A consequence of this in-place hypothesis is that a neuron has no extra space to compute and store the second- and higher-order statistics (e.g., correlations) of its input fibers. This work first provides a classification of learning algorithms. It then shows that two well-known in-place biological mechanisms, the Hebbian rule and lateral inhibition, are sufficient to develop orientation-selective cells, similar to those found in V1, from inputs of natural images; many other cells that have not been fully understood emerge as well. The computational study of these two in-place learning mechanisms leads to a new concept: these cells correspond to what are called lobe components, which are high concentrations in the probability density of the neuronal input space. Further analysis explains how every neuron can learn with near-optimal efficiency by scheduling its plasticity while interacting with other connected neurons. A simple, in-place (Type-5) learning algorithm is presented. Experimental results show that this simple, biologically inspired algorithm outperforms some well-known state-of-the-art ICA algorithms, thanks to its near-optimal efficiency.
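The Hebbian-plus-lateral-inhibition idea summarized above can be illustrated with a short numerical sketch. The Python snippet below updates a small bank of neurons with a winner-take-all Hebbian rule and an amnesic (recency-weighted) learning rate so that each vector drifts toward a region of high probability density. The function name lca_sketch, the schedule parameters t1, t2, c, r, and the single-winner form of inhibition are illustrative assumptions, not the paper's exact Type-5 algorithm.

```python
# Minimal sketch of a Hebbian, lateral-inhibition (winner-take-all) update in the
# spirit of lobe components.  The amnesic learning-rate schedule and the
# single-winner inhibition are assumptions made for illustration.
import numpy as np

def lca_sketch(X, n_neurons, t1=20.0, t2=200.0, c=2.0, r=2000.0, seed=0):
    """Incrementally estimate lobe-component-like vectors from the rows of X."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    V = rng.standard_normal((n_neurons, d))   # component vectors, one per neuron
    n = np.zeros(n_neurons)                   # per-neuron update counts

    for x in X:
        # Responses: inner products with unit-length component vectors.
        norms = np.linalg.norm(V, axis=1) + 1e-12
        y = (V / norms[:, None]) @ x

        # Lateral inhibition: only the most responsive neuron updates.
        j = np.argmax(np.abs(y))
        n[j] += 1

        # Amnesic mean: weight recent inputs more heavily as experience grows
        # (the piecewise schedule below is an assumed example).
        t = n[j]
        if t <= t1:
            mu = 0.0
        elif t <= t2:
            mu = c * (t - t1) / (t2 - t1)
        else:
            mu = c + (t - t2) / r
        w1 = (t - 1.0 - mu) / t   # retention rate
        w2 = (1.0 + mu) / t       # learning rate

        # Hebbian update: move the winner toward (response * input).
        V[j] = w1 * V[j] + w2 * y[j] * x

    return V

# Usage example: recover two directions of high probability density
# from synthetic two-cluster data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal([3.0, 0.0], 0.3, size=(500, 2)),
                           rng.normal([0.0, 3.0], 0.3, size=(500, 2))])
    rng.shuffle(data)
    print(lca_sketch(data, n_neurons=2))
```

Because each update touches only the winning neuron and uses nothing beyond its own vector, count, and the current input, the sketch stays within the in-place constraint described in the abstract: no separate learning network and no stored higher-order statistics.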
