On the computational power of Elman-style recurrent networks

Recently, Elman (1991) has proposed a simple recurrent network capable of identifying and classifying temporal patterns. Although Elman networks have been used extensively in many different fields, their theoretical capabilities have not been fully characterized. Research in the 1960s showed that for every finite state machine there exists a recurrent artificial neural network that approximates it to an arbitrary degree of precision. This paper extends that result to architectures meeting the constraints of Elman networks, thus proving that their computational power is at least that of finite state machines.
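To illustrate the kind of construction the abstract describes, the following sketch (my own, with hand-chosen hypothetical weights, not taken from the paper) wires a minimal Elman-style network, in which context units copy the previous hidden layer, so that it emulates a two-state parity automaton. Hard-threshold activations are used as a limiting case of the usual sigmoid.

```python
import numpy as np

def step(z):
    # Hard-threshold activation (limiting case of a steep sigmoid).
    return (z > 0).astype(float)

# Hand-chosen weights for a two-state parity automaton.
# Hidden unit h1 computes OR(x, p), h2 computes AND(x, p), where
# p = h1 - h2 is the parity bit carried in the context layer, so the
# new parity is XOR(x, p). The output unit reads p back out.
W_in  = np.array([[1.0], [1.0]])    # input -> hidden
W_ctx = np.array([[1.0, -1.0],
                  [1.0, -1.0]])     # context (previous hidden) -> hidden
b_h   = np.array([-0.5, -1.5])      # hidden thresholds (OR, AND)
W_out = np.array([1.0, -1.0])       # hidden -> output
b_y   = -0.5

def run_parity(xs):
    context = np.zeros(2)           # context units start at zero (parity 0)
    ys = []
    for x in xs:
        h = step(W_in @ np.array([float(x)]) + W_ctx @ context + b_h)
        ys.append(float(step(np.array([W_out @ h + b_y]))[0]))
        context = h                 # Elman step: context copies hidden layer
    return ys
```

Running `run_parity([1, 1, 0, 1])` yields the running parity of the input bits, showing that a fixed-weight Elman architecture can carry finite-state information forward through its context units.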

[1] J. L. Elman. Distributed representations, simple recurrent networks, and grammatical structure. 1991, Machine Learning.

[2] S. C. Ahalt, et al. Phonetic to acoustic mapping using recurrent neural networks. 1991, Proceedings of ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing.

[3] C. Iooss. From lattices of phonemes to sentences: a recurrent neural network approach. 1991, IJCNN-91-Seattle International Joint Conference on Neural Networks.

[4] M. D. Hanes, et al. Acoustic-to-phonetic mapping using recurrent neural networks. 1994, IEEE Transactions on Neural Networks.

[5] W. Pitts, et al. A Logical Calculus of the Ideas Immanent in Nervous Activity (1943). 2021, Ideas That Created the Future.

[6] B. Blumenfeld. A connectionist approach to the recognition of trends in time-ordered medical parameters. 1990, Computer Methods and Programs in Biomedicine.

[7] J. L. Elman, et al. Finding Structure in Time. 1990, Cognitive Science.

[8] K.-F. Lee, et al. Speaker-independent recognition of connected utterances using recurrent and non-recurrent neural networks. 1989, International 1989 Joint Conference on Neural Networks.