A New Modulated Hebbian Learning Rule - Biologically Plausible Method for Local Computation of a Principal Subspace

This paper presents one possible implementation of a transformation that performs a linear mapping onto a lower-dimensional subspace; the subspace analyzed here is the principal component subspace. The idea implemented in this paper generalizes the recently proposed ∞OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared with some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies in order to modify a single efficacy. Synaptic efficacies are modified by a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. A structural similarity between the proposed network and part of the retinal circuit is presented as well.
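The abstract does not reproduce the update equations themselves. As a rough point of reference only, the sketch below implements the classical Oja subspace rule, the standard local PCA-subspace baseline this line of work builds on, together with a generic three-factor (modulated) Hebbian step. Everything here is an illustrative assumption: `modulated_hebb_step`, its scalar modulation signal `m`, and the learning rate `eta` are hypothetical placeholders and do not reproduce the paper's MH or MHO update.

```python
import numpy as np

def oja_subspace_step(W, x, eta=0.01):
    """One step of the classical Oja subspace rule.

    W : (n, k) weight matrix whose columns span the estimated subspace
    x : (n,) zero-mean input sample
    """
    y = W.T @ x                                   # k-dimensional output
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))
    return W

def modulated_hebb_step(W, x, m, eta=0.01):
    """Generic modulated-Hebbian (three-factor) step: a plain Hebbian term
    x * y scaled by a global scalar modulation signal m. This is only a
    schematic form, NOT the paper's MH/MHO rule; without an additional
    stabilizing mechanism such an update is not bounded on its own.
    """
    y = W.T @ x
    W += eta * m * np.outer(x, y)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k = 8, 2
    # Synthetic zero-mean data with a dominant 2-D principal subspace.
    C = np.diag([5.0, 3.0] + [0.1] * (n - 2))
    X = rng.multivariate_normal(np.zeros(n), C, size=5000)
    W = 0.1 * rng.standard_normal((n, k))
    for x in X:
        W = oja_subspace_step(W, x, eta=0.005)
    # Columns of W should approximately span the top-2 eigenvector subspace.
    print(np.round(W, 2))
```

Note the design contrast the sketch is meant to highlight: the decay term of the Oja subspace rule couples each efficacy to the whole weight matrix, whereas a modulated Hebbian step only needs the local pre- and postsynaptic activities plus one global modulatory signal, which is the kind of locality the abstract emphasizes.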
