Development of a test battery for evaluating speech perception in complex listening environments.

In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method-of-adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) at which listeners could understand 100% of the speech (SRT100) and the highest SNR at which they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials retained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions. These differences were comparable to those reported in previous studies examining the effects of audiovisual cues, binaural cues, room reverberation, and time compression on speech intelligibility.
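The standard QuickSIN scoring mentioned above can be illustrated with a short sketch. This is not code from the study; it is a minimal illustration assuming the published QuickSIN list structure (six sentences presented at SNRs of 25 down to 0 dB in 5 dB steps, five key words scored per sentence), where the Spearman-Kärber estimate of the 50% threshold reduces to 27.5 dB minus the total number of key words repeated correctly, and SNR loss is expressed relative to a nominal normal-hearing SNR-50 of 2 dB:

```python
def quicksin_snr50(words_correct_per_sentence):
    """Estimate SNR-50 (dB) from key-word scores for one QuickSIN list.

    words_correct_per_sentence: six integers in 0..5, ordered from the
    25 dB SNR sentence down to the 0 dB SNR sentence.
    """
    if len(words_correct_per_sentence) != 6:
        raise ValueError("a QuickSIN list has exactly 6 sentences")
    if any(not 0 <= w <= 5 for w in words_correct_per_sentence):
        raise ValueError("each sentence is scored on 5 key words")
    total = sum(words_correct_per_sentence)
    # Spearman-Kaerber midpoint estimate for 5 dB steps, 5 words per step
    return 27.5 - total


def snr_loss(snr50, normal_snr50=2.0):
    """SNR loss relative to the nominal normal-hearing SNR-50 of 2 dB."""
    return snr50 - normal_snr50


# Example: a listener who repeats all key words at the four easiest SNRs
# but none at 5 and 0 dB SNR (scores 5,5,5,5,0,0 -> 20 words correct):
snr = quicksin_snr50([5, 5, 5, 5, 0, 0])  # 27.5 - 20 = 7.5 dB
loss = snr_loss(snr)                      # 7.5 - 2.0 = 5.5 dB SNR loss
```

The method-of-adjustment estimates of SRT100 and SRT0 described in the abstract are subjective bracketing judgments and have no comparable closed-form score.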

[1] S. Soli et al., "Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise," J. Acoust. Soc. Am. (1994).

[2] L. Rabiner et al., "Binaural release from masking for speech and gain in intelligibility," J. Acoust. Soc. Am. (1967).

[3] C. M. Rankovic et al., "Estimating articulation scores," J. Acoust. Soc. Am. (1997).

[4] B. E. Walden et al., "Evaluating the articulation index for auditory-visual consonant recognition," J. Acoust. Soc. Am. (1996).

[5] L. D. Braida et al., "Evaluating the articulation index for auditory-visual input," J. Acoust. Soc. Am. (1987).

[6] R. H. Wilson et al., "A comparison of two word-recognition tasks in multitalker babble: Speech Recognition in Noise Test (SPRINT) and Words-in-Noise Test (WIN)," J. Am. Acad. Audiol. (2008).

[7] K. Grant et al., "Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition, and auditory-visual integration," J. Acoust. Soc. Am. (1998).

[8] K. D. Kryter, "Methods for the Calculation and Use of the Articulation Index" (1962).

[9] R. Plomp et al., "Auditory handicap of hearing impairment and the limited benefit of hearing aids," J. Acoust. Soc. Am. (1978).

[10] D. S. Brungart et al., "Informational and energetic masking effects in the perception of two simultaneous talkers," J. Acoust. Soc. Am. (2001).

[11] M. Killion et al., "Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners," J. Acoust. Soc. Am. (2004).

[12] S. Kalluri et al., "Objective measures of listening effort: effects of background noise and noise reduction," J. Speech Lang. Hear. Res. (2009).

[13] E. Grassi et al., "Auditory models of suprathreshold distortion and speech intelligibility in persons with impaired hearing," J. Am. Acad. Audiol. (2013).

[14] S. Gordon-Salant et al., "Effects of stimulus and noise rate variability on speech perception by younger and older adults," J. Acoust. Soc. Am. (2004).

[15] G. Kidd et al., "The effects of hearing loss and age on the benefit of spatial separation between multiple talkers in reverberant rooms," J. Acoust. Soc. Am. (2008).

[16] C. Speaks et al., "Subjective vs. objective intelligibility of sentences in listeners with hearing loss," J. Speech Lang. Hear. Res. (1998).