Recurrent auto-associative networks and sequential processing

A novel connectionist architecture that develops static representations of structured sequences is presented. The model is based on simple recurrent networks trained on an auto-association task in a way that guarantees the development of a unique static representation for each sequence. The model can be applied to the modeling of natural language and of cognition more generally.
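
As a concrete illustration, the following is a minimal Python/NumPy sketch of the encode/decode cycle such an architecture implies: an Elman-style recurrent encoder folds a symbol sequence into one static hidden vector, and a recurrent decoder unrolls that vector back into the sequence. All names (encode, decode, the weight matrices) and the one-hot symbol alphabet are illustrative assumptions, not the paper's implementation; the training procedure (backpropagation through time on the auto-association error) is omitted, so the weights shown are untrained.

```python
import numpy as np

# Sketch of a sequential auto-associator (hypothetical names, not the
# paper's code): a recurrent encoder compresses a symbol sequence into
# one static hidden vector; a recurrent decoder unrolls it back.

rng = np.random.default_rng(0)

ALPHABET = 5   # size of the one-hot symbol alphabet (assumed)
HIDDEN = 16    # dimensionality of the static representation (assumed)

# Encoder weights: h_t = tanh(W_ih x_t + W_hh h_{t-1})
W_ih = rng.normal(0, 0.1, (HIDDEN, ALPHABET))
W_hh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))

# Decoder weights: advance a state from the static vector, emitting
# one symbol per step through a readout layer.
W_dec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_out = rng.normal(0, 0.1, (ALPHABET, HIDDEN))

def encode(symbols):
    """Fold a sequence of symbol ids into one static hidden vector."""
    h = np.zeros(HIDDEN)
    for s in symbols:
        x = np.eye(ALPHABET)[s]           # one-hot input
        h = np.tanh(W_ih @ x + W_hh @ h)  # Elman recurrence
    return h                              # the static representation

def decode(h, length):
    """Unroll the static vector back into a symbol sequence."""
    out = []
    for _ in range(length):
        out.append(int(np.argmax(W_out @ h)))  # read out a symbol
        h = np.tanh(W_dec @ h)                 # advance decoder state
    return out

seq = [2, 0, 3, 1]
rep = encode(seq)             # static code for the whole sequence
print(decode(rep, len(seq)))  # after training, this should equal seq
```

The auto-association criterion is what makes the final hidden vector usable as a static code: the decoder can only reproduce the sequence if the encoder's last state distinguishes it from every other training sequence.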
