Non-linear data structure extraction using simple Hebbian networks

We present a class of neural network algorithms, based on simple Hebbian learning, which allow the discovery of higher-order structure in data. The networks use negative feedback of activation to self-organise; such networks have previously been shown to be capable of performing principal component analysis (PCA). In this paper, this is extended to exploratory projection pursuit (EPP), a statistical method for investigating structure in high-dimensional data sets. Unlike previous proposals for networks which learn using Hebbian learning, no explicit weight normalisation, decay or weight clipping is required. The results are extended to multiple units and related both to the statistical literature on EPP and to the neural network literature on non-linear PCA.
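The negative-feedback learning described above can be sketched as follows. This is a minimal illustration of the general mechanism (feedforward activation, subtraction of the fed-back reconstruction, then a simple Hebbian update on the residual), not the paper's exact algorithm; the synthetic 5-D data with a 2-D underlying subspace, the learning rate, and the epoch count are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 5-D samples lying in a random 2-D subspace.
basis = rng.standard_normal((5, 2))
X = rng.standard_normal((2000, 2)) @ basis.T
X -= X.mean(axis=0)

n_out, n_in = 2, 5
W = 0.01 * rng.standard_normal((n_out, n_in))  # small random initial weights
eta = 0.005                                    # assumed learning rate

for _ in range(30):                  # training epochs (assumed)
    for x in X:
        y = W @ x                    # feedforward activation
        e = x - W.T @ y              # negative feedback of activation
        W += eta * np.outer(y, e)    # simple Hebbian update on the residual;
                                     # no explicit normalisation, decay or
                                     # weight clipping is applied

# Fraction of the data variance captured by the learned subspace
# (should approach 1 if W has found the principal subspace).
P = W.T @ np.linalg.pinv(W @ W.T) @ W   # projector onto span of W's rows
captured = np.trace(P @ X.T @ X) / np.trace(X.T @ X)
print(round(captured, 3))
```

Note that the feedback term keeps the weights bounded on its own: the update vanishes once the reconstruction `W.T @ y` matches the input, which is why no separate normalisation step is needed.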
