A Recurrent Neural Network for Linear Fractional Programming with Bound Constraints

This paper presents a novel continuous-time recurrent neural network model that performs linear fractional optimization subject to bound constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function under the bound constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent: its trajectory cannot escape the feasible region and converges to an exact optimal solution from any initial point chosen within the feasible bound region. Simulation results further demonstrate the global convergence and good performance of the proposed neural network on linear fractional programming problems with bound constraints.
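The dynamics described above can be illustrated with a simple numerical sketch. The code below is not the paper's exact model; it is a standard projection-type network, dx/dt = P(x − ∇f(x)) − x, where P clips the state onto the bound region, applied to a hypothetical linear fractional objective f(x) = (cᵀx + c₀)/(dᵀx + d₀) with an illustrative choice of coefficients. Because the update is a convex combination of points inside the box, the Euler trajectory stays feasible, mirroring the primal property claimed in the abstract.

```python
import numpy as np

# Hypothetical problem data: minimize f(x) = (c.x + c0) / (d.x + d0)
# over the box l <= x <= u, assuming d.x + d0 > 0 on the box.
c, c0 = np.array([1.0, -2.0]), 3.0
d, d0 = np.array([2.0, 1.0]), 4.0
l, u = np.array([0.0, 0.0]), np.array([2.0, 2.0])

def f(x):
    return (c @ x + c0) / (d @ x + d0)

def grad_f(x):
    # Quotient rule for the linear fractional objective.
    num, den = c @ x + c0, d @ x + d0
    return (c * den - d * num) / den**2

def simulate(x0, step=0.01, n_steps=20000):
    # Euler integration of dx/dt = P(x - grad f(x)) - x,
    # where P(.) projects (clips) onto the box [l, u].
    # With step <= 1 each iterate is a convex combination of
    # feasible points, so the trajectory never leaves the box.
    x = x0.copy()
    for _ in range(n_steps):
        x += step * (np.clip(x - grad_f(x), l, u) - x)
    return x

x_star = simulate(np.array([1.0, 1.0]))
print(x_star, f(x_star))
```

For this objective the minimum over the box sits at the vertex (0, 2), where f = −1/6; a linear fractional function is pseudoconvex on the region where its denominator is positive, so the projected gradient flow converges to that global minimizer from any feasible start.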
