Fixed point analysis for discrete-time recurrent neural networks

The author proves that every such recurrent neural network possesses a fixed point and uses a geometric approach to locate the fixed points. Stability is analyzed in both the low-gain and high-gain regimes, and a generalized Hopfield saturation theorem is established for the high-gain case of the discrete-time model.
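As a minimal illustrative sketch (not the author's construction), the two regimes can be seen numerically for an assumed discrete-time model of the form x_{t+1} = tanh(g·Wx_t + b): at low gain the map is a contraction and iteration converges to the unique fixed point, while at high gain the outputs saturate toward the ±1 rails, as in Hopfield-style saturation results.

```python
import numpy as np

# Assumed discrete-time recurrent model (for illustration only):
#   x_{t+1} = tanh(g * W @ x_t + b)
rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n))
b = rng.standard_normal(n)

def iterate(gain, steps=2000):
    """Iterate the network map from a random initial state."""
    x = rng.standard_normal(n)
    for _ in range(steps):
        x = np.tanh(gain * (W @ x) + b)
    return x

# Low gain: gain * ||W|| < 1, so tanh(g*Wx + b) is a contraction and
# the iteration converges to the network's unique fixed point.
low = iterate(gain=0.1)
residual = np.max(np.abs(np.tanh(0.1 * (W @ low) + b) - low))

# High gain: the state is driven into the saturated region of tanh,
# so (most) components end up near the +/-1 rails.
high = iterate(gain=50.0)
```

Here `residual` measures how closely the low-gain limit satisfies the fixed-point equation; at high gain the iterates need not converge to a fixed point (synchronous updates can cycle), but their components saturate, which is the phenomenon the saturation theorem describes.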
