SVD Algorithms: APEX-like versus Subspace Methods

We compare several new SVD learning algorithms, derived from the subspace method of principal component analysis, with the APEX-like algorithm proposed by Diamantaras. Experiments show that these subspace-based algorithms converge as fast as the APEX-like algorithm.
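
As a rough illustration of the subspace idea the comparison rests on, here is a minimal NumPy sketch of a coupled, Oja-style subspace rule that pulls two weight matrices toward the dominant left and right singular subspaces of a matrix A. The function name subspace_svd, the learning rate, the iteration budget, and the particular update are illustrative assumptions, not the exact algorithms evaluated in the paper.

    import numpy as np

    def subspace_svd(A, k, lr=0.01, n_iter=5000, seed=0):
        # Illustrative coupled subspace rule (an assumption, not the
        # paper's algorithm): U chases A @ V and V chases A.T @ U, while
        # the -U @ M and -V @ M.T decay terms keep both factors bounded,
        # in the spirit of Oja's subspace rule for PCA.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        U = 0.1 * rng.standard_normal((m, k))
        V = 0.1 * rng.standard_normal((n, k))
        for _ in range(n_iter):
            M = U.T @ A @ V                # k x k coupling matrix
            dU = A @ V - U @ M
            dV = A.T @ U - V @ M.T
            U += lr * dU
            V += lr * dV
        return U, V

    # Sanity check against an exact SVD: the cosines of the principal
    # angles between the learned and exact left subspaces should be
    # close to 1.
    A = np.random.default_rng(1).standard_normal((8, 6))
    U, V = subspace_svd(A, k=2)
    Ue, s, Vte = np.linalg.svd(A)
    Q = np.linalg.qr(U)[0]
    print(np.linalg.svd(Q.T @ Ue[:, :2], compute_uv=False))

The sketch recovers only the subspaces, not the individual singular vectors; subspace-style rules of this kind are typically followed by a small k-by-k SVD of the coupling matrix if the singular triplets themselves are needed.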

References

[1] Konstantinos Ioannis Diamantaras, "Principal component learning networks and applications," 1992.

[2] Sun-Yuan Kung et al., "A neural network learning algorithm for adaptive principal component extraction (APEX)," International Conference on Acoustics, Speech, and Signal Processing, 1990.

[3] R. Brockett, "Dynamical systems that sort lists, diagonalize matrices, and solve linear programming problems," 1991.

[4] John B. Moore et al., "Global analysis of Oja's flow for neural networks," IEEE Trans. Neural Networks, 1994.

[5] Sun-Yuan Kung et al., "Principal Component Neural Networks: Theory and Applications," 1996.

[6] Terence D. Sanger, "Two Iterative Algorithms for Computing the Singular Value Decomposition from Input/Output Samples," NIPS, 1993.

[7] Juha Karhunen et al., "Principal component neural networks — Theory and applications," Pattern Analysis and Applications, 1998.

[8] Gene H. Golub et al., "Matrix computations," 1983.

[9] Mark D. Plumbley, "Lyapunov functions for convergence of principal component algorithms," Neural Networks, 1995.

[10] Erkki Oja, "Neural Networks, Principal Components, and Subspaces," Int. J. Neural Syst., 1989.

[11] E. Oja, "Simplified neuron model as a principal component analyzer," Journal of Mathematical Biology, 1982.

[12] Kurt Hornik et al., "Learning in linear neural networks: a survey," IEEE Trans. Neural Networks, 1995.

[13] Jim Kay et al., "Feature discovery under contextual supervision using mutual information," Proc. 1992 IJCNN International Joint Conference on Neural Networks, 1992.