Norris, McQueen, and Cutler (2000) have put forth a model of word and phoneme processing, Merge, that they describe as purely bottom-up, yet capable of accounting for all of the findings in the literature that had previously appeared to require top-down mechanisms. They argue that Merge should be preferred to all interactive models because it can account for the data at least as well, with less theoretical baggage. In most respects, Merge is similar to other models of word and phoneme recognition. The initial level of processing, the "input phoneme" level, is similar to the input level in most other models (typically called the feature level; Norris et al. noted that this level could have been called the feature level in Merge). The top level of the model consists of lexical representations. As in models like TRACE (McClelland & Elman, 1986), each lexical representation competes with the others via mutual inhibition. What makes Merge unique is the characterisation and connection pattern of a set of representations called the "output phoneme" level, which also has mutually inhibitory connections. The output phoneme level receives input from the input phoneme level and from the lexical level. Although this architecture includes connections from the lexical level to a phonemic level, Norris et al. assert that Merge is purely autonomous. The crux of this claim is their characterisation of the output phoneme level as not really an integral part of the word perception system; instead, it should be thought of as an almost artificial, task-specific construct that the listener uses to meet the particular demands of an experimental situation.
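To make this connectivity concrete, the following Python sketch wires up toy localist nodes in the pattern just described: input phoneme nodes project to both the lexical level and a small set of output phoneme (decision) nodes, lexical candidates compete through mutual inhibition, the decision nodes merge bottom-up and lexical evidence while inhibiting each other, and no activation ever flows back to the input phoneme level. The toy lexicon, parameter values, and simple linear update rule are illustrative assumptions, not the implementation or parameters reported by Norris et al. (2000).

```python
import numpy as np

# Illustrative sketch of the Merge connectivity described above (assumed toy
# parameters, not the published simulation details).

LEXICON = {"gift": ["g", "i", "f", "t"],
           "kiss": ["k", "i", "s", "s"]}

def clip(x):
    """Keep activations in [0, 1]."""
    return float(np.clip(x, 0.0, 1.0))

def run_merge(input_act, decision_nodes, steps=10, excite=0.4, inhibit=0.3):
    """Feedforward-only dynamics: input phonemes -> lexical and output levels."""
    word_act = {w: 0.0 for w in LEXICON}
    out_act = {p: 0.0 for p in decision_nodes}

    for _ in range(steps):
        # Lexical level: bottom-up excitation from input phoneme nodes,
        # plus mutual inhibition between word candidates.
        new_word = {
            w: clip(word_act[w]
                    + excite * np.mean([input_act.get(p, 0.0) for p in phones])
                    - inhibit * sum(word_act[o] for o in LEXICON if o != w))
            for w, phones in LEXICON.items()
        }
        # Output phoneme (decision) level: merges input-phonemic and lexical
        # evidence, with mutual inhibition among the response alternatives.
        # Note there is no connection back to the input phoneme level.
        new_out = {
            p: clip(out_act[p]
                    + excite * (input_act.get(p, 0.0)
                                + sum(word_act[w] for w, ph in LEXICON.items()
                                      if p in ph))
                    - inhibit * sum(out_act[q] for q in decision_nodes if q != p))
            for p in decision_nodes
        }
        word_act, out_act = new_word, new_out

    return word_act, out_act

# A Ganong-style example: the first segment is ambiguous between /g/ and /k/,
# the rest is a clear "ift". Lexical support for "gift" lets the /g/ decision
# node win at the output level, even though no activation is ever fed back to
# the input phoneme level.
ambiguous = {"g": 0.5, "k": 0.5, "i": 1.0, "f": 1.0, "t": 1.0}
words, decisions = run_merge(ambiguous, decision_nodes=["g", "k"])
print(words)      # e.g. {'gift': 1.0, 'kiss': 0.0}
print(decisions)  # /g/ ends up well above /k/
```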
[1] James L. McClelland, et al. (1988). Cognitive penetration of the mechanisms of perception: Compensation for coarticulation of lexically restored phonemes.
[2] James L. McClelland, et al. (1986). The TRACE model of speech perception. Cognitive Psychology.
[3] Anne Cutler, et al. (1979). Monitoring sentence comprehension.
[4] Arthur G. Samuel, et al. (1996). Early levels of analysis of speech.
[5] M. Pitt, et al. (1998). Is compensation for coarticulation mediated by the lexicon?
[6] A. G. Samuel, et al. (2001). Knowing a word affects the fundamental perception of the sounds within it. Psychological Science.
[7] M. Pitt, et al. (2003). Lexical activation (and other factors) can mediate compensation for coarticulation.
[8] W. Ganong (1980). Phonetic categorization in auditory word perception. Journal of Experimental Psychology: Human Perception and Performance.
[9] A. Samuel (1997). Lexical activation produces potent phonemic percepts. Cognitive Psychology.
[10] V. Mann, et al. (1981). Influence of preceding fricative on stop consonant perception. The Journal of the Acoustical Society of America.
[11] D. Norris, et al. (2000). Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences.