A neurodynamic approach to compute the generalized eigenvalues of symmetric positive matrix pair

Abstract This paper shows that the generalized eigenvalues of a symmetric positive matrix pair can be computed efficiently, under more general hypotheses, by the recurrent neural network (RNN) proposed in Liu et al. (2008). More precisely, it is proved that under these weaker hypotheses the state solution of the RNN converges to a generalized eigenvector of the symmetric positive matrix pair, and that the associated generalized eigenvalue depends on the initial point of the state solution. Furthermore, the largest and smallest generalized eigenvalues can also be obtained by the RNN. Numerical experiments are presented to illustrate these results.
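The abstract states the result without reproducing the network model, so the sketch below is an illustration only, not the exact dynamics of Liu et al. (2008). It implements one standard neurodynamic model for the generalized eigenvalue problem Ax = λBx: the generalized Rayleigh-quotient flow dx/dt = ±(Ax − r(x)Bx) with r(x) = (xᵀAx)/(xᵀBx). Along this flow dr/dt = ±(2/(xᵀBx))‖Ax − r(x)Bx‖², so for a positive definite B the quotient is monotone: forward integration climbs to the largest generalized eigenvalue and the sign-flipped flow descends to the smallest, matching the behavior the abstract describes. All names (`rnn_flow`, `spd`), the step size, and the iteration count are assumptions made for this sketch.

```python
# Minimal sketch (NOT the paper's exact model): forward-Euler integration of
# the generalized Rayleigh-quotient flow dx/dt = sign * (A x - r(x) B x),
# whose equilibria satisfy A x = r(x) B x, i.e. x is a generalized eigenvector.
import numpy as np

rng = np.random.default_rng(0)

def spd(n):
    """Random symmetric positive definite matrix (assumed test data)."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def rnn_flow(A, B, x0, largest=True, dt=1e-3, steps=100_000):
    """Integrate dx/dt = sign * (A x - r(x) B x).  r(x) is monotone along
    trajectories, so it climbs to the largest (or descends to the smallest)
    generalized eigenvalue for a generic initial point x0."""
    sign = 1.0 if largest else -1.0
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        r = (x @ A @ x) / (x @ B @ x)      # generalized Rayleigh quotient
        x = x + dt * sign * (A @ x - r * (B @ x))
        x = x / np.linalg.norm(x)          # r is scale-invariant; rescale x
    r = (x @ A @ x) / (x @ B @ x)
    return r, x

n = 6
A, B = spd(n), spd(n)
r_max, v = rnn_flow(A, B, rng.standard_normal(n), largest=True)
r_min, _ = rnn_flow(A, B, rng.standard_normal(n), largest=False)

# Reference values from a direct solver, for comparison.
ref = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)
print(f"flow:  lambda_max ~ {r_max:.6f}, lambda_min ~ {r_min:.6f}")
print(f"exact: lambda_max = {ref[-1]:.6f}, lambda_min = {ref[0]:.6f}")
print("residual ||A v - r B v|| =", np.linalg.norm(A @ v - r_max * (B @ v)))
```

Running the flow from different initial points x0 can yield different eigenpairs, which is consistent with the abstract's remark that the obtained generalized eigenvalue depends on the initial point of the state solution.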

[1] Yiguang Liu, et al., A simple functional neural network for computing the largest and smallest eigenvalues and corresponding eigenvectors of a real symmetric matrix, 2005, Neurocomputing.

[2] Yimin Wei, et al., Neural network approach to computing outer inverses based on the full rank representation, 2016.

[3] Qingshan Liu, et al., A One-Layer Recurrent Neural Network With a Discontinuous Hard-Limiting Activation Function for Quadratic Programming, 2008, IEEE Transactions on Neural Networks.

[4] Sitian Qin, et al., A One-Layer Recurrent Neural Network for Pseudoconvex Optimization Problems With Equality and Inequality Constraints, 2017, IEEE Transactions on Cybernetics.

[5] Jianping Li, et al., Notes on "Recurrent neural network model for computing largest and smallest generalized eigenvalue", 2010, Neurocomputing.

[6] Kurt Hornik, et al., Convergence analysis of local feature extraction algorithms, 1992, Neural Networks.

[7] Erkki Oja, et al., Subspace methods of pattern recognition, 1983.

[8] Qingfu Zhang, et al., A class of learning algorithms for principal component analysis and minor component analysis, 2000, IEEE Transactions on Neural Networks.

[9] John J. Hopfield, et al., Simple 'neural' optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit, 1986.

[10] Lijun Zhao, et al., Neural network for constrained nonsmooth optimization using Tikhonov regularization, 2015, Neural Networks.

[11] Sitian Qin, et al., A Two-Layer Recurrent Neural Network for Nonsmooth Convex Optimization Problems, 2015, IEEE Transactions on Neural Networks and Learning Systems.

[12] Jack Dongarra, et al., Templates for the Solution of Algebraic Eigenvalue Problems, 2000, Software, Environments, Tools.

[13] Yan Fu, et al., Neural networks based approach for computing eigenvectors and eigenvalues of symmetric matrix, 2004.

[14] Gene H. Golub, et al., Matrix computations, 1983.

[15] Fa-Long Luo, et al., A principal component analysis algorithm with invariant norm, 1995, Neurocomputing.

[16] Mauro Forti, et al., Generalized neural network for nonsmooth nonlinear programming problems, 2004, IEEE Transactions on Circuits and Systems I: Regular Papers.

[17] Chen Xu, et al., A One-Layer Recurrent Neural Network for Constrained Complex-Variable Convex Optimization, 2018, IEEE Transactions on Neural Networks and Learning Systems.

[18] Juha Karhunen, et al., Principal component neural networks: Theory and applications, 1998, Pattern Analysis and Applications.

[19] Yimin Wei, et al., Recurrent neural network for computation of generalized eigenvalue problem with real diagonalizable matrix pair and its applications, 2016, Neurocomputing.

[20] Xinyi Le, et al., A Neurodynamic Optimization Approach to Bilevel Quadratic Programming, 2017, IEEE Transactions on Neural Networks and Learning Systems.

[21] Predrag S. Stanimirović, et al., Complex Neural Network Models for Time-Varying Drazin Inverse, 2016, Neural Computation.