A PDP Approach to Processing Center-Embedded Sentences

Recent PDP models have shown great promise in advancing our understanding of the mechanisms that subserve language processing. In this paper we address the specific question of how multiply embedded sentences might be processed. It has been shown experimentally that comprehension of center-embedded structures is poor relative to right-branching structures. It has also been demonstrated that this effect can be attenuated: the presence of semantically constrained lexical items in center-embedded sentences improves processing performance. This raises two questions: (1) What is it about the processing mechanism that makes center-embedded sentences relatively difficult? (2) How are the effects of semantic bias to be accounted for? Following the approach outlined in Elman (1990, 1991), we train a simple recurrent network on a word-prediction task over various syntactic structures, including center-embedded and right-branching sentences. The behavior of the network closely resembles the pattern of experimental data, both in yielding superior performance on right-branching structures relative to center-embeddings, and in processing center-embeddings better when they involve semantically constrained lexical items. This suggests that the recurrent network may provide insight into the locus of similar effects in humans.
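
For readers unfamiliar with the architecture, the following is a minimal sketch of the kind of simple recurrent network Elman (1990) describes, trained on next-word prediction. The toy vocabulary, corpus, hidden-layer size, and learning rate below are illustrative assumptions for exposition only, not the grammar or training regime used in this paper.

```python
# A minimal sketch of an Elman-style simple recurrent network (SRN)
# for next-word prediction, in NumPy. All specifics (vocabulary,
# corpus, hyperparameters) are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and corpus of word-index sequences.
vocab = ["boy", "girl", "dog", "chases", "sees", "who", "."]
V, H = len(vocab), 16                 # vocabulary and hidden sizes
W_xh = rng.normal(0, 0.1, (H, V))     # input -> hidden
W_hh = rng.normal(0, 0.1, (H, H))     # context (prev. hidden) -> hidden
W_hy = rng.normal(0, 0.1, (V, H))     # hidden -> output
lr = 0.1

def one_hot(i):
    v = np.zeros(V); v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# At each step the network sees the current word plus a copy of its
# own previous hidden state (the "context" units) and must predict
# the next word. As in Elman (1990), error is backpropagated one
# step only: the context is treated as ordinary input.
corpus = [[0, 5, 1, 4, 3, 2, 6]]      # "boy who girl sees chases dog ."
for epoch in range(200):
    for seq in corpus:
        h = np.zeros(H)               # reset context at sentence start
        for t in range(len(seq) - 1):
            x, target = one_hot(seq[t]), seq[t + 1]
            h_prev = h
            h = np.tanh(W_xh @ x + W_hh @ h_prev)
            p = softmax(W_hy @ h)     # distribution over next words
            dy = p - one_hot(target)  # cross-entropy output gradient
            dh = (W_hy.T @ dy) * (1 - h**2)
            W_hy -= lr * np.outer(dy, h)
            W_xh -= lr * np.outer(dh, x)
            W_hh -= lr * np.outer(dh, h_prev)
```

Because each prediction is conditioned on the recycled hidden state, the network's performance can be scored by how closely its output distribution tracks the continuations the grammar actually permits at each point in a sentence.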