An unsupervised learning method for representing simple sentences

A recent neurocomputational study showed that it is possible for a model of the language areas of the brain (Wernicke's area, Broca's area, etc.) to learn to process words correctly [1]. This model is unique in that it is a neuroanatomically based model of word learning derived from the Wernicke–Lichtheim–Geschwind theory of language processing. For example, when subjected to simulated focal damage, the model breaks down in ways reminiscent of the classic aphasias. While such results are intriguing, this previous work was limited to processing only single words: nouns corresponding to concrete objects. Here we take the first steps towards generalizing the methods used in this earlier model to work with full sentences instead of isolated words. We gauge the richness of the neural representations that emerge during purely unsupervised learning in several ways. In particular, using a separate "recognition network", we demonstrate that the model's encoding of sentences is adequate to permit subsequent extraction of a symbolic, hierarchical representation of sentence meaning. Although our results are encouraging, substantial further work will be needed to create a large-scale model of the human cortical network for language.
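To make the two-stage idea concrete, the sketch below illustrates the general pattern the abstract describes: an unsupervised map learns encodings of sentence vectors, and a separate supervised "recognition network" reads symbolic labels back out of those encodings. This is an illustrative toy, not the paper's architecture: it uses a 1-D Kohonen-style self-organizing map where the paper builds on multi-winner SOMs [4], a plain softmax readout where the paper's recognition network is presumably an RPROP-trained backpropagation net [11], and synthetic data; all sizes, names, and the label scheme are assumptions.

```python
# Toy sketch (assumptions throughout, not the paper's model):
# stage 1 learns sentence encodings without supervision (1-D SOM);
# stage 2 trains a separate "recognition network" (softmax readout)
# to extract a symbolic label from the learned encodings.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic corpus: each "sentence" is a fixed-length feature vector,
# each label a symbolic category (e.g., which noun fills a role).
n_sentences, dim, n_labels = 200, 12, 4
X = rng.random((n_sentences, dim))
y = np.minimum((X[:, 0] * n_labels).astype(int), n_labels - 1)

# --- Unsupervised stage: 1-D Kohonen-style self-organizing map ---
map_size = 10
W = rng.random((map_size, dim))            # one weight vector per map node
for t in range(2000):
    x = X[rng.integers(n_sentences)]
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
    lr = 0.5 * np.exp(-t / 1000)                     # decaying learning rate
    sigma = 3.0 * np.exp(-t / 1000)                  # shrinking neighborhood
    h = np.exp(-((np.arange(map_size) - bmu) ** 2) / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)         # pull neighborhood toward input

def encode(x):
    # Graded activity over map nodes: the learned "encoding" of a sentence.
    return np.exp(-np.linalg.norm(W - x, axis=1))

# --- Supervised stage: recognition network (softmax readout) ---
H = np.array([encode(x) for x in X])       # encodings of all sentences
V = rng.normal(0.0, 0.1, (map_size, n_labels))
onehot = np.eye(n_labels)[y]
for _ in range(500):
    logits = H @ V
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    V -= 0.1 * H.T @ (p - onehot) / n_sentences      # cross-entropy gradient

acc = (np.argmax(H @ V, axis=1) == y).mean()
print(f"recognition accuracy on training data: {acc:.2f}")
```

The point of the sketch is the division of labor: the map never sees labels, yet its activity pattern retains enough structure for a separately trained readout to recover symbolic categories, which mirrors how the paper probes the richness of its unsupervised sentence encodings.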

[1] David Caplan. Neurolinguistics and linguistic aphasiology: Linguistic descriptions and aphasic syndromes, 1987.

[2] Teuvo Kohonen et al. Self-Organizing Maps, 2010.

[3] A. Damasio et al. Neural systems behind word and concept retrieval, 2004.

[4] James A. Reggia et al. Temporally Asymmetric Learning Supports Sequence Processing in Multi-Winner Self-Organizing Maps. Neural Computation, 2004.

[5] James A. Reggia et al. Simulating single word processing in the classic aphasia syndromes based on the Wernicke–Lichtheim–Geschwind theory. Brain and Language, 2006.

[6] Michael I. Jordan et al. A more biologically plausible learning rule for neural networks. Proceedings of the National Academy of Sciences of the United States of America, 1991.

[7] C. von der Malsburg. Self-organization of orientation sensitive cells in the striate cortex. Kybernetik, 1973.

[8] David Poeppel et al. Towards a new functional anatomy of language. Cognition, 2004.

[9] Roman Bek. Discourse on one way in which a quantum-mechanics language on the classical logical base can be built up. Kybernetika, 1978.

[10] James A. Reggia et al. Mirror Symmetric Topographic Maps Can Arise from Activity-Dependent Synaptic Changes. Neural Computation, 2005.

[11] Martin A. Riedmiller et al. A direct adaptive method for faster backpropagation learning: the RPROP algorithm. IEEE International Conference on Neural Networks, 1993.

[12] Jeffrey L. Elman. Finding Structure in Time. Cognitive Science, 1990.

[13] Geoffrey E. Hinton et al. Learning internal representations by error propagation, 1986.