Stability Analysis of Gradient-Based Neural Networks for Optimization Problems

The paper introduces a new approach for analyzing the stability of neural network models without using any Lyapunov function. With this approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion covers both isolated equilibrium points and connected equilibrium sets, which may be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability for gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of the gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory, starting from any initial point, converges to an equilibrium point satisfying the second-order necessary optimality conditions. Promising simulation results of the refined gradient-based neural network on some problems are also reported.
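
Since the abstract does not spell out the network dynamics, the following minimal Python sketch assumes the standard gradient-flow form dx/dt = -grad f(x) together with a convex quadratic test objective; the objective f, the matrix Q, the vector b, and the solver settings are all illustrative assumptions, chosen only so that f is bounded below with a Lipschitz-continuous gradient, to illustrate the kind of trajectory convergence described above.

    # Minimal sketch (assumptions noted above, not taken from the paper):
    # integrate the gradient-based neural network ODE dx/dt = -grad f(x)
    # for a convex quadratic objective and compare the trajectory endpoint
    # with the analytic equilibrium point where grad f vanishes.
    import numpy as np
    from scipy.integrate import solve_ivp

    Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite (assumed)
    b = np.array([1.0, -1.0])                 # linear term (assumed)

    def grad_f(x):
        # Gradient of f(x) = 0.5 * x^T Q x - b^T x
        return Q @ x - b

    def dynamics(t, x):
        # Gradient-based neural network dynamics: dx/dt = -grad f(x)
        return -grad_f(x)

    x0 = np.array([5.0, -4.0])                # arbitrary initial point
    sol = solve_ivp(dynamics, (0.0, 20.0), x0, rtol=1e-8, atol=1e-10)

    x_final = sol.y[:, -1]
    x_star = np.linalg.solve(Q, b)            # equilibrium: grad f(x*) = 0
    print("trajectory endpoint:    ", x_final)
    print("equilibrium point:      ", x_star)
    print("distance to equilibrium:", np.linalg.norm(x_final - x_star))

For this convex case the printed distance is essentially zero, matching the convergence result stated for convex optimization problems; the refined network for nonconvex objectives would require additional machinery not sketched here.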
