The authors present a simple recurrent neural network trained to generate sequences of symbols that classify sequences of perceptual input from an environment. Although drawn from a small set, the generated symbols constitute classifications of an environment that is both analog-valued and strongly temporal. The symbol grounding problem is addressed by relating the learned categories directly to the perceptual input, and by analyzing the representation space the network constructs to perform the task. The authors demonstrate that such a grounded system can exhibit useful generalization, and that the internal representation of the symbolic classes usually differs from that of the traditional predicate-logic approach.
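The architecture described — a simple recurrent (Elman-style) network that reads an analog input stream and emits one symbol per time step — can be sketched as follows. This is a minimal illustrative forward pass only; the layer sizes, weight initialization, and function names are assumptions, not details from the paper, and training is omitted.

```python
import numpy as np

# Minimal Elman-style simple recurrent network (SRN) sketch.
# All sizes below are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 8, 4   # analog input dims, hidden units, symbol classes

W_ih = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context -> hidden
W_ho = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run(sequence):
    """Feed a sequence of analog input vectors; emit one symbol id per step."""
    h = np.zeros(n_hid)                   # context: copy of previous hidden state
    symbols = []
    for x in sequence:
        h = sigmoid(W_ih @ x + W_hh @ h)  # hidden state mixes input and context
        y = sigmoid(W_ho @ h)             # activations over the small symbol set
        symbols.append(int(np.argmax(y))) # winning output unit names the class
    return symbols

seq = rng.normal(size=(5, n_in))          # a 5-step analog input stream
symbol_sequence = run(seq)                # one symbol id per input step
```

Because the hidden state at each step depends on the whole input history, the emitted symbol sequence can reflect the temporal structure of the environment, which is the property the abstract emphasizes.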