A single-case study was carried out on a patient (KB) who presented with “aprosodia” following a right-hemisphere stroke, to explore the cross-modal integration of auditory and visual cues in prosodic speech perception. KB was tested on two prosodic speech perception tasks: sentence intonation categorization (i.e., statement or question) and emphatic stress categorization (i.e., whether the first or second noun was stressed). In addition, he was tested on two segmental speech perception tasks: a McGurk task and speech-in-noise. In all tasks, there were three presentation conditions: audio-only, visual-only, and audiovisual. Results showed that KB performed at about chance on both prosody perception tasks in all three presentation conditions. In contrast, he performed near ceiling in the visual-only and audiovisual conditions on both segmental speech perception tasks. His performance on the speech-in-noise task showed that he was able to use visual information to compensate for impoverished auditory information in segmental speech perception, and his results on the McGurk task were indicative of cross-modal integration in segmental speech perception. The results suggest that, although KB’s ability to process visual information in segmental speech tasks is intact, he is nonetheless unable to process prosodic speech information in either the auditory or visual modality.