Computing the eigenvectors and the eigenvalues of largest or smallest modulus of a real antisymmetric matrix with a reduced-scale neural network

In this paper, we extend neural-network-based approaches, which can asymptotically compute the largest or smallest eigenvalues and the corresponding eigenvectors of a real symmetric matrix, to the case of a real antisymmetric matrix. For any n-by-n real antisymmetric matrix, previous neural-network-based methods were formulated as ordinary differential equations (ODEs) of dimension 2n; in contrast, the proposed method is represented by n-dimensional ODEs, which greatly reduces the network scale and achieves higher computing performance. Simulations verify the computational capability of the proposed method.
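The paper's exact ODE system is not reproduced here, but the underlying idea can be illustrated with an n-dimensional flow. The sketch below is an assumption-laden stand-in, not the authors' method: it exploits the fact that for a real antisymmetric A, the matrix B = AᵀA = −A² is symmetric positive semidefinite, and the largest eigenvalue of B equals the squared largest-modulus (purely imaginary) eigenvalue of A. An Oja-type flow dx/dt = Bx − (xᵀBx)x on ℝⁿ, integrated by forward Euler, then recovers that modulus; the function name and step sizes are illustrative choices.

```python
import numpy as np

def largest_modulus_eig(A, steps=20000, dt=1e-3, seed=0):
    """Hypothetical n-dimensional sketch (not the paper's exact ODE):
    estimate the largest-modulus eigenvalue of a real antisymmetric A
    via an Oja-type flow on B = A.T @ A, which equals -A @ A here."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)           # random initial state
    B = A.T @ A                          # symmetric PSD; eigenvalues |lambda|^2
    for _ in range(steps):
        Bx = B @ x
        x = x + dt * (Bx - (x @ Bx) * x)  # forward-Euler step of dx/dt = Bx - (x'Bx)x
    lam2 = (x @ (B @ x)) / (x @ x)       # Rayleigh quotient -> |lambda_max|^2
    return np.sqrt(lam2), x / np.linalg.norm(x)

# Example: 4x4 antisymmetric matrix with eigenvalues ±2i and ±0.5i
A = np.zeros((4, 4))
A[0, 1], A[1, 0] = 2.0, -2.0
A[2, 3], A[3, 2] = 0.5, -0.5
mod, v = largest_modulus_eig(A)
```

Because the flow operates on n-dimensional state, it mirrors the scale advantage the abstract claims over 2n-dimensional formulations, though the paper's actual network dynamics may differ.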
