A new feedback neural network with supervised learning

A model is introduced for continuous-time dynamic feedback neural networks with supervised learning ability. Conventional models are modified so that a given desired vector, and its negative, are guaranteed to be stored in the network as asymptotically stable equilibrium points. The modification is that the output signal of a neuron is multiplied by the square of its associated weight before being supplied to the input of another neuron. A simulation of the complete dynamics is then presented for a prototype single neuron with self-feedback and supervised learning; the simulation illustrates the supervised learning capability of the network.
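As a rough illustration of the single-neuron prototype described above, the following sketch simulates a neuron with self-feedback whose feedback signal is multiplied by the square of its weight. The specific dynamics, activation function, learning rule, and all numerical values here are assumptions made for illustration; they are not the paper's actual equations.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact formulation): a single
# neuron with self-feedback whose feedback signal is multiplied by the
# SQUARE of its weight, as described in the abstract:
#     x_dot = -x + w**2 * tanh(x)
# Learning adjusts w so that the desired output d (and, by oddness of tanh,
# its negative -d) corresponds to an asymptotically stable equilibrium.

d = 0.8                      # desired neuron output, assumed |d| < 1
x_d = np.arctanh(d)          # state at which the output equals d

# --- supervised learning phase (hypothetical gradient rule) --------------
# Drive the equilibrium residual r(w) = -x_d + w**2 * d toward zero.
w = 0.5
eta = 0.1
for _ in range(2000):
    r = -x_d + w**2 * d      # residual of the equilibrium condition at x_d
    w -= eta * r * 2 * w * d # gradient descent on 0.5 * r**2

# --- recall phase: simulate the feedback dynamics -------------------------
dt = 1e-2
x = 0.3                      # perturbed initial state
for _ in range(5000):
    x += dt * (-x + w**2 * np.tanh(x))

print(f"learned weight w = {w:.3f}")
print(f"recalled output  = {np.tanh(x):.3f} (desired {d})")
print("residual at the negative pattern -x_d:",
      f"{x_d + w**2 * np.tanh(-x_d):.2e}")
```

Because the activation is odd and the weight-squared coupling is sign-independent, storing the desired output also stores its negative, which mirrors the storage property claimed in the abstract.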
