Limiting spectral resolution in speech for listeners with sensorineural hearing loss.

Consonant recognition was measured as a function of the degree of spectral resolution of the speech stimulus in normally hearing listeners and in listeners with moderate sensorineural hearing loss. Previous work (Turner, Souza, and Forget, 1995) has shown that listeners with sensorineural hearing loss can recognize consonants as well as listeners with normal hearing when speech is processed to have only one channel of spectral resolution. The hypothesis tested in the present experiment was that when speech was limited to a small number of spectral channels, normally hearing and hearing-impaired listeners would continue to perform similarly. As the stimuli were presented with finer degrees of spectral resolution, and the poorer-than-normal spectral resolving abilities of the hearing-impaired listeners became a limiting factor, one would predict that the performance of the hearing-impaired listeners would fall below that of the normally hearing listeners. Previous research on the frequency-resolution abilities of listeners with mild-to-moderate hearing loss suggests that these listeners have critical bandwidths three to four times larger than those of listeners with normal hearing. In the present experiment, speech stimuli were processed to have 1, 2, 4, or 8 channels of spectral information. Results for the 1-channel condition were consistent with the previous study: both groups of listeners performed similarly. However, the hearing-impaired listeners performed more poorly than the normally hearing listeners in all other conditions, including the 2-channel condition. These results appear to contradict the original hypothesis, in that listeners with moderate sensorineural hearing loss would be expected to have at least 2 channels of frequency resolution.
One possibility is that the frequency resolution of hearing-impaired listeners may be much poorer than previously estimated; however, a subsequent filtered speech experiment did not support this explanation. The present results do indicate that although listeners with hearing loss are able to use the temporal-envelope information of a single channel in a normal fashion, when given the opportunity to combine information across more than one channel, they show deficient performance.
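The channel-limited stimuli described above are typically generated with a noise-band vocoder: the speech is divided into contiguous frequency bands, each band's temporal envelope is extracted, and each envelope modulates band-limited noise. The sketch below illustrates this kind of processing in a minimal, FFT-based form; the band edges (100-4000 Hz, log-spaced), the ~10-ms envelope smoothing window, and the function name `vocode` are illustrative assumptions, not the exact parameters of the present study.

```python
import numpy as np

def vocode(signal, fs, n_channels, f_lo=100.0, f_hi=4000.0):
    """Noise-band vocoder sketch: split `signal` into `n_channels`
    contiguous log-spaced bands, extract each band's temporal
    envelope, and use it to modulate band-limited noise.
    With n_channels=1 only the overall temporal envelope survives;
    larger n_channels preserves progressively finer spectral detail."""
    n = len(signal)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # band boundaries (Hz)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    noise_spec = np.fft.rfft(np.random.default_rng(0).standard_normal(n))
    out = np.zeros(n)
    win = max(1, int(fs * 0.01))                        # ~10 ms smoother
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(np.where(mask, spec, 0), n)
        # crude envelope: rectify, then smooth with a moving average
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        carrier = np.fft.irfft(np.where(mask, noise_spec, 0), n)
        out += env * carrier                            # envelope-modulated noise
    return out
```

In a study of this kind the same sentence or consonant tokens would be passed through this processing at each channel count (1, 2, 4, 8), so that all conditions share identical temporal-envelope cues and differ only in spectral resolution.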

[1] Harvey B. Fletcher, et al. Speech and hearing in communication, 1953.

[2] F. J. Hill, et al. Speech recognition as a function of channel capacity in a discrete set of channels. The Journal of the Acoustical Society of America, 1968.

[3] M. Schroeder. Reference Signal for Signal Quality Studies, 1968.

[4] J. Flanagan. Speech Analysis, Synthesis and Perception, 1971.

[5] A. R. Thornton, et al. Low-frequency hearing loss: perception of filtered speech, psychophysical tuning curves, and masking. The Journal of the Acoustical Society of America, 1977.

[6] M. Ruggero, et al. Kanamycin and bumetanide ototoxicity: Anatomical, physiological and behavioral correlates. Hearing Research, 1982.

[7] D. A. Nelson, et al. Pure tone pitch perception and low-frequency hearing loss. The Journal of the Acoustical Society of America, 1983.

[8] C. V. Pavlovic. Use of the articulation index for assessing residual auditory function in listeners with sensorineural hearing impairment. The Journal of the Acoustical Society of America, 1984.

[9] M. Liberman, et al. Single-neuron labeling and chronic cochlear pathology. III. Stereocilia damage and alterations of threshold tuning curves. Hearing Research, 1984.

[10] D. D. Dirks, et al. Speech recognition and the Articulation Index for normal and hearing-impaired listeners. The Journal of the Acoustical Society of America, 1985.

[11] N. Viemeister, et al. Temporal modulation transfer functions in normal-hearing and hearing-impaired listeners. Audiology: official organ of the International Society of Audiology, 1985.

[12] B. C. Moore, et al. Auditory filter shapes in subjects with unilateral and bilateral cochlear impairments. The Journal of the Acoustical Society of America, 1986.

[13] C. Turner, et al. Spread of masking in normal subjects and in subjects with high-frequency hearing loss. Audiology: official organ of the International Society of Audiology, 1986.

[14] M. Robb, et al. Audibility and recognition of stop consonants in normal and hearing-impaired subjects. The Journal of the Acoustical Society of America, 1987.

[15] Robert D. Celmer, et al. Critical Bands in the Perception of Speech Signals by Normal and Sensorineural Hearing Loss Listeners, 1987.

[16] D. D. Dirks, et al. Auditory filter characteristics and consonant recognition for hearing-impaired listeners. The Journal of the Acoustical Society of America, 1989.

[17] D. J. Van Tasell, et al. Temporal cues for consonant recognition: training, talker generalization, and use in evaluation of cochlear implants. The Journal of the Acoustical Society of America, 1992.

[18] S. Bacon, et al. Modulation detection in subjects with relatively flat hearing losses. Journal of Speech and Hearing Research, 1992.

[19] B. Moore, et al. Effects of spectral smearing on the intelligibility of sentences in noise, 1993.

[20] J. M. Festen, et al. Limited resolution of spectral contrast and hearing loss for speech in noise. The Journal of the Acoustical Society of America, 1993.

[21] R. Plomp, et al. Effect of spectral envelope smearing on speech reception. II. The Journal of the Acoustical Society of America, 1992.

[22] R. Drullman. Temporal envelope and fine structure cues for speech intelligibility, 1994.

[23] Jont B. Allen. How do humans process and recognize speech? IEEE Transactions on Speech and Audio Processing, 1993.

[24] C. W. Turner, et al. Use of temporal envelope cues in speech recognition by normal and hearing-impaired listeners. The Journal of the Acoustical Society of America, 1995.

[25] R. V. Shannon, et al. Speech Recognition with Primarily Temporal Cues. Science, 1995.

[26] Speech recognition at higher than normal speech and noise levels, 1995.

[27] M. Dorman, et al. Speech intelligibility as a function of the number of channels of stimulation for signal processors using sine-wave and noise-band outputs. The Journal of the Acoustical Society of America, 1997.

[28] R. V. Shannon, et al. Speech recognition as a function of the number of electrodes used in the SPEAK cochlear implant speech processor. Journal of Speech, Language, and Hearing Research (JSLHR), 1997.

[29] S. D. Thomas, et al. Gap detection as a function of stimulus loudness for listeners with and without hearing loss. Journal of Speech, Language, and Hearing Research (JSLHR), 1997.

[30] C. Turner, et al. High-frequency audibility: benefits for hearing-impaired listeners. The Journal of the Acoustical Society of America, 1998.

[31] D. Byrne, et al. Speech recognition of hearing-impaired listeners: predictions from audibility and the limited role of high-frequency amplification. The Journal of the Acoustical Society of America, 1998.