Finite State Automata and Connectionist Machines: A Survey

This paper reviews work in the literature relating Finite State Automata (FSAs) to Neural Networks (NNs). The studies surveyed address Grammatical Inference tasks as well as methods for representing FSAs in a neural model. The inference of Regular Grammars through NNs has focused either on the acceptance or rejection of strings generated by the grammar, or on the prediction of the possible successor(s) of each character in a string. Different neural architectures using first- and second-order connections have been adopted. Several techniques for extracting the FSA inferred by a trained net have been described in the literature and are also reported here. Finally, theoretical work on the relationship between NNs and FSAs is outlined and discussed.
