Joint computation of principal and minor components using gradient dynamical systems over Stiefel manifolds

This paper presents several dynamical systems for the simultaneous computation of principal and minor subspaces of a symmetric matrix. The proposed methods are derived by optimizing cost functions chosen so that their optimal values are attained at vectors that are linear combinations of the extreme eigenvectors of the given matrix. Necessary optimality conditions are stated in terms of the gradient of these cost functions over a Stiefel manifold. The stability of the equilibrium points of six such algorithms is established using Lyapunov's direct method.
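The abstract does not spell out the specific cost functions, but the general idea can be illustrated with a standard Oja-type gradient flow on the Stiefel manifold: integrating dX/dt = ±(AX − X XᵀAX) with a QR retraction drives X toward an orthonormal basis of the principal subspace (+ sign) or the minor subspace (− sign, i.e. the flow for −A). The sketch below is a hypothetical illustration of this family of methods, not the paper's exact algorithms; the function name, step size, and iteration count are assumptions.

```python
import numpy as np

def stiefel_gradient_flow(A, p, principal=True, step=0.01, iters=5000, seed=0):
    """Euler-integrate an Oja-type gradient flow dX/dt = +/-(A X - X X^T A X),
    retracting back to the Stiefel manifold {X : X^T X = I} after each step.

    principal=True drives X toward the p-dimensional principal subspace of the
    symmetric matrix A; principal=False toward the minor subspace.
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    # Random orthonormal starting point on the Stiefel manifold.
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))
    sign = 1.0 if principal else -1.0
    for _ in range(iters):
        # Riemannian-gradient direction of the Rayleigh-quotient cost.
        G = sign * (A @ X - X @ (X.T @ A @ X))
        X = X + step * G
        X, _ = np.linalg.qr(X)  # retraction: restore X^T X = I
    return X
```

At an equilibrium, A X = X (XᵀAX), so the columns of X span an invariant subspace of A; the principal and minor subspaces are the stable equilibria of the two flows, which is the kind of stability statement the paper establishes via Lyapunov's direct method.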
