An Approach to Learning in Hopfield Neural Networks

In this paper we present preliminary ideas for the design of continuous nonlinear neural networks with learning. Specifically, we introduce the idea of learning in Hopfield recurrent neural networks. The network is trained so that applying a given set of inputs produces the desired set of outputs. A method is developed for determining the interconnection weights of the network so as to achieve the desired stable equilibrium points; the method also shows how weights that are not computed a priori can be learned. Conditions for the asymptotic stability of the equilibrium points are obtained, and an illustrative simulation is presented.
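
The abstract does not reproduce the paper's weight-determination procedure, so the following is only a minimal sketch of the kind of system being described: a continuous Hopfield-type network with graded-response neurons whose symmetric weight matrix is chosen so that a desired pattern is a stable equilibrium. The outer-product (Hebbian) rule used here is a common stand-in, not the authors' method, and all function names and parameter values are assumptions.

```python
# Sketch of a continuous Hopfield-type network (graded-response neurons).
# The Hebbian outer-product rule below is a placeholder for the paper's
# weight-determination method; parameters (tau, gain, dt) are assumed.

import numpy as np

def hebbian_weights(patterns):
    """Outer-product weights for +/-1 patterns; zero self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)      # symmetric, zero diagonal
    return W

def simulate(W, u0, steps=2000, dt=0.01, tau=1.0, gain=5.0):
    """Integrate  tau * du/dt = -u + W @ tanh(gain * u)  by forward Euler."""
    u = u0.copy()
    for _ in range(steps):
        v = np.tanh(gain * u)          # graded neuron outputs
        u += (dt / tau) * (-u + W @ v) # continuous Hopfield dynamics
    return np.tanh(gain * u)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pattern = rng.choice([-1.0, 1.0], size=8)    # desired equilibrium point
    W = hebbian_weights(pattern[None, :])
    u0 = pattern + 0.3 * rng.standard_normal(8)  # perturbed initial state
    v_final = simulate(W, u0)
    print("stored :", pattern)
    print("settled:", np.sign(v_final))
```

Because W is symmetric with zero diagonal, the continuous dynamics admit a Lyapunov (energy) function, which is the standard route to the kind of asymptotic-stability conditions the abstract refers to; the sketch simply checks numerically that the network state settles back to the stored pattern from a nearby initial condition.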
