Notes on "Recurrent neural network model for computing largest and smallest generalized eigenvalue"

We point out, via a counterexample, some mistakes in the paper "Recurrent neural network model for computing largest and smallest generalized eigenvalue", Neurocomputing 71 (2008) 3589-3594. We then propose another recurrent neural network (RNN) with an invariant B-norm for computing the largest or smallest generalized eigenvalue and the corresponding eigenvector of any symmetric positive pair (A, B); the model extends straightforwardly to the second largest or smallest generalized eigenvalue and its eigenvector using techniques established in related literature. In addition, the convergence of this RNN is proven rigorously. Simulation results demonstrate the computational capability of the proposed model.
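For illustration, here is a minimal numerical sketch of such a B-norm-invariant flow. It is not the exact model of the commented paper: it assumes the dynamics dx/dt = B^{-1}Ax - r(x)x with r(x) = (x^T A x)/(x^T B x), one standard choice, for which d/dt(x^T B x) = 2x^T A x - 2r(x) x^T B x = 0, so the B-norm is conserved along exact trajectories and the state converges toward an eigenvector of the largest generalized eigenvalue (replace A with -A for the smallest). All function and parameter names below are illustrative.

```python
# Hedged sketch (illustrative, not the paper's exact network): forward-Euler
# integration of the B-norm-preserving flow
#     dx/dt = B^{-1} A x - r(x) x,   r(x) = (x'Ax)/(x'Bx),
# for which d/dt (x'Bx) = 2 x'Ax - 2 r(x) x'Bx = 0, so x'Bx is invariant
# along exact trajectories (Euler preserves it only approximately).
import numpy as np

def largest_generalized_eigenpair(A, B, steps=20000, dt=1e-2, seed=0):
    """Estimate the largest generalized eigenpair of (A, B); pass -A for
    the smallest. Function and parameter names are illustrative."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    B_inv = np.linalg.inv(B)               # acceptable for a small demo
    for _ in range(steps):
        r = (x @ A @ x) / (x @ B @ x)      # generalized Rayleigh quotient
        x = x + dt * (B_inv @ (A @ x) - r * x)
    return (x @ A @ x) / (x @ B @ x), x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((5, 5))
    A = M + M.T                            # symmetric
    N = rng.standard_normal((5, 5))
    B = N @ N.T + 5.0 * np.eye(5)          # symmetric positive definite
    lam, _ = largest_generalized_eigenpair(A, B)
    print("flow estimate :", lam)
    print("direct check  :", np.linalg.eigvals(np.linalg.inv(B) @ A).real.max())
```

For a pair with a reasonably separated top eigenvalue, the two printed values should agree to several digits; with a small spectral gap, more integration steps may be needed.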
