A comparison of audiovisual and auditory-only training on the perception of spectrally-distorted speech

Recent research suggests that incorporating visual speech into auditory training can improve auditory-only speech perception. The long-term aim of our work is to investigate this approach for hearing-impaired users, in particular cochlear-implant users. In the pilot study presented in this paper, we use spectrally-distorted speech to train two groups of normal-hearing subjects: native English listeners and non-native, English-speaking Saudi listeners. Our pilot study suggests that both groups attain similar improvement in audio-only speech perception when visual speech is introduced into the training process. This may indicate that cochlear-implant users would also benefit from the introduction of visual speech in training, given that the reduced ability of non-native listeners to process native speech can be likened to the reduced processing ability of cochlear-implant users, which results from the inherent noise in a cochlear implant's processing of sound.
