Convergence of a Recurrent Neural Network for Nonconvex Optimization Based on an Augmented Lagrangian Function

In this paper, a recurrent neural network based on an augmented Lagrangian function is proposed for seeking local minima of nonconvex optimization problems with inequality constraints. First, it is shown that each equilibrium point of the neural network corresponds to a Karush-Kuhn-Tucker (KKT) point of the problem. Second, by appropriately choosing a control parameter, the neural network can be made asymptotically stable at those local minima that satisfy some mild conditions. The latter property is ensured by the convexification capability of the augmented Lagrangian function. The proposed scheme is inspired by many existing neural networks in the literature and can be regarded as an extension or improvement of them. A simulation example is discussed to illustrate the results.
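The abstract only describes the network in words, so the following is a minimal numerical sketch of the general idea: a Lagrange-type flow on a Bertsekas-style augmented Lagrangian for inequality constraints, integrated by forward Euler. The toy problem, the specific augmented Lagrangian form, the parameter values (the control parameter c, the step size dt), and all function names are illustrative assumptions, not the paper's own example or exact dynamics.

```python
import numpy as np

# Hypothetical nonconvex test problem (not from the paper):
#   minimize  f(x) = (x1^2 - 1)^2 + x2^2        (nonconvex in x1)
#   subject to g(x) = 1.2 - x1 <= 0             (active at the solution)
# The feasible local minimum is x* = (1.2, 0) with KKT multiplier
# lambda* = 4 * 1.2 * (1.2^2 - 1) = 2.112.

def f_grad(x):
    return np.array([4.0 * x[0] * (x[0]**2 - 1.0), 2.0 * x[1]])

def g(x):
    return np.array([1.2 - x[0]])

def g_jac(x):
    return np.array([[-1.0, 0.0]])

c = 10.0                   # control (augmentation) parameter; chosen large
                           # enough to convexify L_c near the local minimum
dt = 1e-3                  # Euler step size for integrating the ODEs
x = np.array([2.0, 0.5])   # initial network state
lam = np.zeros(1)          # initial multiplier estimate

for _ in range(200_000):
    # Shifted multiplier of the augmented Lagrangian:
    #   mu_i = max(0, lam_i + c * g_i(x))
    mu = np.maximum(0.0, lam + c * g(x))
    # State dynamics: dx/dt = -grad_x L_c(x, lam)
    dx = -(f_grad(x) + g_jac(x).T @ mu)
    # Multiplier dynamics: dlam/dt = grad_lam L_c = (mu - lam) / c
    dlam = (mu - lam) / c
    x += dt * dx
    lam += dt * dlam

print(x, lam)  # expected to approach (1.2, 0) and lambda ~ 2.112
```

In this sketch the constraint is active at the limit point, so the trajectory settles at a KKT point of the nonconvex problem rather than at the infeasible unconstrained minimizer x1 = 1; a smaller c can destroy this local stability, which mirrors the abstract's point that the control parameter must be chosen appropriately.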
