Nonlinear generalizations of principal component learning algorithms

In this paper, we introduce and study nonlinear generalizations of several neural algorithms that learn the principal eigenvectors of the data covariance matrix. We first consider the robust versions that optimize a nonquadratic criterion under orthonormality constraints. As an important byproduct, Sanger's GHA and Oja's SGA algorithms for learning principal components are derived from a natural optimization problem. We also introduce a fully nonlinear generalization that has signal separation capabilities not possessed by standard principal component analysis learning algorithms.
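To make the flavor of these learning rules concrete, the sketch below shows a GHA-style stochastic update in which the neuron outputs are passed through a nonlinearity g (here tanh). The function name, learning rate, and the exact placement of g are illustrative assumptions, not the paper's precise robust/nonlinear formulation; with g taken as the identity, the update reduces to Sanger's original GHA.

```python
import numpy as np

def nonlinear_gha_step(W, x, lr=0.01, g=np.tanh):
    """One stochastic update of a GHA-style rule with the outputs passed
    through a nonlinearity g.  Illustrative sketch only; the paper's exact
    robust/nonlinear rules may apply g differently.

    W : (k, d) weight matrix, rows approximate principal directions
    x : (d,) zero-mean input sample
    """
    y = W @ x                            # linear neuron outputs
    gy = g(y)                            # nonlinear outputs
    # The lower-triangular term provides the hierarchic (deflation-like)
    # coupling that drives successive rows toward successive eigenvectors.
    L = np.tril(np.outer(gy, gy))
    W += lr * (np.outer(gy, x) - L @ W)
    return W

# Minimal usage: learn two directions from synthetic 5-dimensional data.
rng = np.random.default_rng(0)
C = np.diag([5.0, 3.0, 1.0, 0.5, 0.1])   # hypothetical data covariance
W = rng.standard_normal((2, 5)) * 0.1
for _ in range(20000):
    x = rng.multivariate_normal(np.zeros(5), C)
    W = nonlinear_gha_step(W, x, lr=0.001)
```

Replacing the lower-triangular factor with the full outer product would give the symmetric (subspace) variant of the rule rather than the hierarchic one.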
