The processing of information from multiple sources in simultaneous interpreting

Language processing is influenced by multiple sources of information. We examined whether performance in simultaneous interpreting improves when two sources of information are provided, the auditory speech together with the corresponding lip movements, compared with presenting the auditory speech alone. Although visible speech improved sentence recognition, there was no difference in performance between the two presentation conditions when bilinguals simultaneously interpreted from English to German or from English to Spanish. One reason visible speech may not have contributed to performance is that the auditory signal was presented without noise (Massaro, 1998). This hypothesis should be tested in future work. It should also be investigated whether an effect of visible speech emerges in other contexts, in which visual information could provide cues to emotion, prosody, or syntax.

[1] Dominic W. Massaro, et al. Information processing and a computational approach to the study of simultaneous interpretation, 1997.

[2] Yonghong Yan, et al. Universal speech tools: the CSLU toolkit, 1998, ICSLP.

[3] W. H. Sumby, et al. Visual contribution to speech intelligibility in noise, 1954.

[4] Alexander H. Waibel, et al. Interactive Translation of Conversational Speech, 1996, Computer.

[5] Jennifer M. Glass, et al. Virtually Perfect Time Sharing in Dual-Task Performance: Uncorking the Central Cognitive Bottleneck, 2001, Psychological Science.

[6] Linda Anderson, et al. Simultaneous interpretation: contextual and translation aspects, 1979.

[7] H. McGurk, et al. Hearing lips and seeing voices, 1976, Nature.

[8] C. Benoît, et al. Effects of phonetic context on audio-visual intelligibility of French, 1994, Journal of Speech and Hearing Research.

[9] Kevin G. Munhall, et al. A Case of Impaired Auditory and Visual Speech Prosody Perception after Right Hemisphere Damage, 2002, Neurocase.

[10] M. E. Demorest, et al. A computational approach to analyzing sentential speech perception: phoneme-to-phoneme stimulus-response alignment, 1994, The Journal of the Acoustical Society of America.

[11] David E. Kieras, et al. Précis to a practical unified theory of cognition and action: Some lessons from EPIC computational models of human multiple-task performance, 1997.

[12] Joseph H. Danks, et al. Cognitive processes in translation and interpreting, 1997.

[13] D. Gerver, et al. The effects of noise on the performance of simultaneous interpreters: accuracy of performance, 1974, Acta Psychologica.

[14] Dominic W. Massaro, et al. Perception of Synthetic Visual Speech, 1996.

[15] Michael M. Cohen, et al. Modeling Coarticulation in Synthetic Visual Speech, 1993.

[16] D. Massaro. Perceiving talking faces: from speech perception to a behavioral principle, 1999.