Fixed points in two-neuron discrete time recurrent networks: stability and bifurcation considerations

The position, number, and stability types of fixed points of a two-neuron recurrent network with nonzero weights are investigated. Using simple geometrical arguments in the space of derivatives of the sigmoid transfer function with respect to the weighted sum of neuron inputs, we partition the network state space into several regions corresponding to the stability types of the fixed points. If the neurons have the same mutual interaction pattern, i.e. they either mutually inhibit or mutually excite each other, a lower bound is given on the rate of convergence of the attractive fixed points towards the saturation values as the absolute values of the weights on the self-loops grow. The role of the weights in the location of fixed points is explored through an intuitively appealing characterization of neurons according to their inhibition/excitation performance in the network. In particular, each neuron can be of one of four types: greedy, enthusiastic, altruistic, or depressed. Both with and without external inhibition/excitation sources, we investigate the position and number of fixed points according to the character of the neurons. When both neurons excite themselves and have the same mutual interaction pattern, the mechanism of creation of a new attractive fixed point is shown to be that of a saddle-node bifurcation.
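
To make the setting concrete: the dynamics in question are the synchronous map x(t+1) = sigma(W x(t) + theta), where sigma is the logistic sigmoid, W is the 2x2 weight matrix, and theta collects the external inputs. The sketch below (not from the paper; the weights, biases, grid, and tolerances are illustrative assumptions) iterates such a two-neuron map from a grid of initial states to locate its attractive fixed points, then classifies each one via the eigenvalues of the Jacobian diag(sigma'(u)) W.

```python
import numpy as np

def sigmoid(u):
    """Logistic sigmoid transfer function."""
    return 1.0 / (1.0 + np.exp(-u))

def step(x, W, theta):
    """One synchronous update of the map x -> sigma(W x + theta)."""
    return sigmoid(W @ x + theta)

def jacobian(x, W, theta):
    """Jacobian of the map at x: diag(sigma'(u)) @ W with u = W x + theta,
    using sigma'(u) = sigma(u) (1 - sigma(u))."""
    s = step(x, W, theta)
    return np.diag(s * (1.0 - s)) @ W

# Illustrative weights (an assumption, not the paper's example): each neuron
# excites itself (positive self-loops) and the neurons mutually excite each
# other -- the regime in which the abstract reports saddle-node creation of
# a new attractive fixed point.
W = np.array([[8.0, 2.0],
              [2.0, 8.0]])
theta = np.array([-5.0, -5.0])  # external inputs / biases

# Find attractive fixed points by iterating to convergence from a grid of
# initial states in the open unit square (the network state space).
found = []
for a in np.linspace(0.05, 0.95, 10):
    for b in np.linspace(0.05, 0.95, 10):
        x = np.array([a, b])
        for _ in range(5000):
            x = step(x, W, theta)
        if not any(np.allclose(x, f, atol=1e-6) for f in found):
            found.append(x)

for f in found:
    mags = np.abs(np.linalg.eigvals(jacobian(f, W, theta)))
    kind = "attractive" if mags.max() < 1.0 else "non-attractive"
    print(f"fixed point {f.round(4)}  |eigenvalues| = {mags.round(3)}  ({kind})")
```

With these particular weights the iteration settles onto two attractive fixed points near the saturation corners (0, 0) and (1, 1); the symmetric fixed point at (0.5, 0.5) is repelling and so is never reached by forward iteration, which is why this scheme only recovers the attractive points.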
