5 Probabilistic Association and Denotation in Machine Learning of Natural Language

We have taken what we believe is a new tack in the approach to machine learning by using in a very explicit way principles of association and generalization derived from classical psychological principles. The principles we used were, however, much more specific and technically developed. The fundamental role of association as a basis for conditioning is thoroughly recognized in modern neuroscience and is essential to the experimental study of the neuronal activity of a variety of animals. For similar reasons its role is just as central to the learning theory of neural networks, now rapidly developing in many different directions. We have not, however, made explicit use of neural networks, but have worked out our theory of language learning at a higher level of abstraction. In our judgment the difficulties we face need to be solved before a still more detailed theory is developed. The classical psychological principles of learning used here have been thought by linguists to be wholly inadequate as the basis for a theory of language learning. Nothing could be further from the truth. Skinner's [6] naive formulation of the problems of language learning was rightly attacked by Chomsky [s], but no serious alternative learning theory has been offered by linguists even today.

In the first section we briefly describe our approach to machine learning of natural language. In the second section we focus on the problem of denotation, which is important in our use of probabilistic association of words and their meanings. In the third section we outline the background cognitive and perceptual assumptions of our machine learning work. In the fourth section we formulate explicitly our two general
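To make the notion of probabilistic association between words and their meanings concrete, the following is a minimal illustrative sketch, not the authors' actual learning procedure. It assumes training data of the simplest possible form, sentences paired with sets of denoted objects, and estimates the conditional probability that a word denotes an object from raw co-occurrence counts; the data and function names are hypothetical.

```python
from collections import defaultdict

def learn_associations(pairs):
    """Estimate P(denotation | word) by counting how often each word
    co-occurs with each denoted object across sentence-scene pairs.

    pairs: iterable of (sentence, set_of_denotations).
    Returns a dict mapping (word, denotation) -> relative frequency.
    """
    word_counts = defaultdict(int)   # how many sentences contain the word
    pair_counts = defaultdict(int)   # how many times word and denotation co-occur
    for sentence, denotations in pairs:
        for w in sentence.lower().split():
            word_counts[w] += 1
            for d in denotations:
                pair_counts[(w, d)] += 1
    return {(w, d): c / word_counts[w] for (w, d), c in pair_counts.items()}

# Hypothetical toy corpus: each sentence is paired with the objects
# and properties present in the accompanying scene.
pairs = [
    ("the red circle", {"circle", "red"}),
    ("the red square", {"square", "red"}),
    ("the blue circle", {"circle", "blue"}),
]
probs = learn_associations(pairs)
```

On this toy corpus the association P("red" denotes red) is 1.0, since "red" co-occurs with the property red in every sentence containing it, while an uninformative word such as "the" spreads its probability across all denotations, which is the kind of asymmetry a probabilistic association mechanism exploits.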