Is Lexical Access Driven by Temporal Order or Perceptual Salience? Evidence from British Sign Language

Robin L. Thompson (robin.thompson@ucl.ac.uk)
David P. Vinson (d.vinson@ucl.ac.uk)
Neil Fox (neil.fox@ucl.ac.uk)
Gabriella Vigliocco (g.vigliocco@ucl.ac.uk)

Deafness, Cognition and Language Research Centre, Department of Cognitive, Perceptual and Brain Sciences, University College London, 26 Bedford Way, London WC1H 0AP, UK

Abstract

While processing spoken language, people look towards relevant objects, and the time course of their gaze(s) can inform us about online language processing (Tanenhaus et al., 1995). Here, we investigate lexical recognition in British Sign Language (BSL) using a visual world paradigm, the first such study using a signed language. Comprehension of spoken words and signs could be driven by temporal constraints regardless of modality (“first in, first processed”), or by perceptual salience, which differs for speech (auditorily perceived) and sign (visually perceived). Deaf BSL signers looked more often to semantically related distracter pictures than to unrelated pictures, replicating studies using acoustically presented speech. For phonologically related pictures, gaze increased only for those sharing visually salient phonological features (i.e., location and movement features). Results are discussed in the context of language processing in different modalities. Overall, we conclude that lexical processing for both speech and sign is likely driven by perceptual salience, and that potential differences in processing emerge from differences between the visual and auditory systems.

Keywords: lexical access; sign language; phonology; semantics; visual world; modality

Introduction

General theories of language processing have developed on the basis of extensive data from spoken, but not signed, languages, making it impossible to tease apart those aspects of language processing that are truly general from those dependent on the oral-aural language modality. While spoken language processing happens through aural perception of sounds, sign language processing occurs through visual perception, which allows for more simultaneous input of information; spoken languages make use of the mouth and vocal tract, while signed languages use slower manual articulators (the hands, as well as the eyes, mouth and body). An understanding of the processing differences that arise from these differing language modalities is critical for understanding the interaction of language processing with other cognitive systems such as perception and action. Here we take advantage of these physical differences between signed and spoken languages to investigate the nature of lexical processing and lexical access.

For spoken languages, it is generally uncontroversial that information is processed almost immediately as it comes in (e.g., Rayner & Clifton, 2009). Such incremental, moment-by-moment language processing is likely necessary to keep up with the incredibly fast rate of speech input (estimated at 150-190 words per minute; Marslen-Wilson, 1973). However, during incremental processing, even while hearing a single word, listeners are faced with many possible alternatives that match the current acoustic-phonetic input.
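As an illustration of how many alternatives can match the unfolding input, the sketch below narrows a toy candidate set phoneme by phoneme. It is a strictly prefix-based (cohort-style) simplification using an invented lexicon and phoneme codes, not a model from the paper; continuous-mapping accounts such as TRACE (McClelland & Elman, 1986) also let partially overlapping words (e.g., rhymes) remain active.

```python
# Illustrative sketch (not from the paper): incremental narrowing of the set of
# lexical candidates as acoustic-phonetic input arrives, cohort-style.
# The lexicon and phoneme codes below are invented for illustration.

LEXICON = {
    "beaker":   ["b", "i", "k", "er"],
    "beetle":   ["b", "i", "t", "l"],
    "speaker":  ["s", "p", "i", "k", "er"],
    "carriage": ["k", "a", "r", "i", "j"],
}

def candidates_over_time(heard):
    """For each successive phoneme, return the words still consistent
    with everything heard so far (strict prefix match)."""
    snapshots = []
    for t in range(1, len(heard) + 1):
        prefix = heard[:t]
        active = [w for w, phones in LEXICON.items() if phones[:len(prefix)] == prefix]
        snapshots.append((prefix, active))
    return snapshots

if __name__ == "__main__":
    for prefix, active in candidates_over_time(["b", "i", "k", "er"]):
        print(" ".join(prefix), "->", active)
    # After "b i", both 'beaker' and 'beetle' remain; "b i k" leaves only 'beaker'.
```

Run on the input for “beaker”, the candidate set keeps both 'beaker' and 'beetle' through the shared onset and drops 'beetle' once the third phoneme arrives, mirroring the onset-competitor dynamics described in the next paragraph.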
Empirical evidence suggests that, instead of waiting until temporary ambiguities are resolved, listeners partially activate possible words (i.e., lexical competitors) that match the current phonological information, with potential words being eliminated over time as more information becomes available (e.g., McClelland & Elman, 1986; Gaskell & Marslen-Wilson, 1997).

Evidence for incremental activation of lexical competitors during spoken language processing comes from the “visual world” paradigm, in which language is presented simultaneously with related pictures (Allopenna, Magnuson, & Tanenhaus, 1998; Altmann & Kamide, 2004; Huettig & Altmann, 2005; Yee & Sedivy, 2006). For example, in Allopenna et al. (1998), subjects heard an utterance like “Pick up the beaker” while viewing a display with four pictures: 1) an object matching the noun (the target, e.g., “beaker”), 2) an object with a name beginning with the same phonemes (the onset competitor, e.g., “beetle”), 3) an object with a name sharing the same rhyme (e.g., “speaker”), and 4) an unrelated object (e.g., “carriage”). The probabilities of fixating the target and the onset competitor were identical immediately after word onset (when the two could not yet be distinguished from each other), and fixations to these picture types were higher than fixations to the rhyme or unrelated competitors. Immediately after a phoneme differentiating the target from the onset competitor was reached, the probability of fixating the target rose sharply while the probability of fixating the onset competitor fell. A weaker but significant effect was also observed for rhyme competitors compared to unrelated competitors, indicating that activation is not restricted to words sharing onsets but is continuous (see, for example, McClelland & Elman, 1986); a sketch of how such fixation curves are computed appears below.

A question of interest, then, is why words that share onsets make the strongest lexical competitors. One possibility is that the stronger activation of onset competitors compared to word rhymes is due to temporal considerations: word onsets occur earlier in time. This view about the
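To make the fixation measure referred to above concrete, here is a minimal sketch of how looks to each picture type are commonly binned over time in visual-world analyses. The data format, bin size, and condition labels are assumptions for illustration; this is not the authors' or Allopenna et al.'s actual analysis code.

```python
# Illustrative sketch (assumed data format, not the authors' analysis code):
# proportion of looks to each picture type in successive time bins,
# the summary measure behind visual-world fixation curves.

from collections import defaultdict

# Each sample: (trial_id, time in ms relative to word onset, picture fixated).
samples = [
    (1,  50, "target"), (1, 250, "onset_competitor"), (1, 450, "target"),
    (2,  50, "unrelated"), (2, 250, "target"),        (2, 450, "target"),
]

PICTURE_TYPES = ("target", "onset_competitor", "rhyme_competitor", "unrelated")

def fixation_proportions(samples, bin_ms=200):
    """Proportion of samples spent on each picture type within each time bin."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for _trial, time_ms, picture in samples:
        time_bin = time_ms // bin_ms
        counts[time_bin][picture] += 1
        totals[time_bin] += 1
    return {
        b: {p: counts[b][p] / totals[b] for p in PICTURE_TYPES}
        for b in sorted(totals)
    }

if __name__ == "__main__":
    bin_ms = 200
    for b, props in fixation_proportions(samples, bin_ms).items():
        print(f"{b * bin_ms}-{(b + 1) * bin_ms} ms:", props)
```

Plotting these proportions against time, with word (or sign) onset at zero, yields curves of the kind described above for targets, onset competitors, rhyme competitors, and unrelated pictures.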

[1] K. Emmorey. Language, Cognition, and the Brain: Insights From Sign Language Research, 2001.

[2] Paul D. Allopenna, et al. Tracking the Time Course of Spoken Word Recognition Using Eye Movements: Evidence for Continuous Mapping Models, 1998.

[3] David M. Perlmutter. Sonority and Syllable Structure in American Sign Language, 1993.

[4] John Kingston, et al. Papers in Laboratory Phonology, 1990.

[5] Yuki Kamide, et al. Now you see it, now you don't: Mediating the mapping between language and the visual world, 2004.

[6] D. Corina, et al. Modality and structure in signed and spoken languages: Psycholinguistic investigations of phonological structure in ASL, 2002.

[7] Kearsy Cormier, et al. Modality and structure in signed and spoken languages, 2002.

[8] William Marslen-Wilson, et al. Linguistic Structure and Speech Shadowing at Very Short Latencies, 1973, Nature.

[9] Alexandra Jesse, et al. Early Use of Phonetic Information in Spoken Word Recognition: Lexical Stress Drives Eye Movements Immediately, The Quarterly Journal of Experimental Psychology.

[10] G. Altmann, et al. Word meaning and the control of eye fixation: Semantic competitor effects and the visual world paradigm, 2005, Cognition.

[11] L. Goldstein, et al. Lexical retrieval in American Sign Language production, 2006.

[12] David P. Vinson, et al. The British Sign Language (BSL) norms for age of acquisition, familiarity, and iconicity, 2008, Behavior Research Methods.

[13] K. Emmorey, et al. Lexical Recognition in Sign Language: Effects of Phonetic Structure and Morphology, 1990, Perceptual and Motor Skills.

[14] Julie C. Sedivy, et al. Eye movements to pictures reveal transient semantic activation during spoken word recognition, 2006, Journal of Experimental Psychology: Learning, Memory, and Cognition.

[15] James L. McClelland, et al. The TRACE model of speech perception, 1986, Cognitive Psychology.

[16] K. Rayner, et al. Language processing in reading and speech perception is fast and incremental: Implications for event-related potential research, 2009, Biological Psychology.

[17] Julie C. Sedivy, et al. Integration of visual and linguistic information in spoken language comprehension, 1995, Science.

[18] Wendy Sandler, et al. Sign Language and Linguistic Universals: Entering the lexicon: lexicalization, backformation, and cross-modal borrowing, 2006.

[19] F. Grosjean. Sign & Word Recognition: A First Comparison, 1981.

[20] William D. Marslen-Wilson, et al. Integrating Form and Meaning: A Distributed Model of Speech Perception, 1997.