New Directions in Connectionist Language Modeling

This paper deals with the introduction of long-term memory into a Multilevel Darwinist Brain (MDB) structure based on artificial neural networks, and with its implications for the ability of autonomous robots to adapt to new environments and to recognize previously explored ones. The introduction of long-term memory greatly enhances the ability of organisms implementing the MDB to cope with changing environments while recovering from failures and configuration changes. The paper describes the mechanism, introduces long-term memory within it, and provides examples of its operation both on theoretical problems and on a real robot whose perceptual and actuation mechanisms are changed periodically.