Sentence recognition materials based on frequency of word use and lexical confusability.

The sentence stimuli developed in this project combined aspects of several traditional approaches to speech audiometry. Sentences varied in frequency of word use and phonetic confusability. Familiar consonant-vowel-consonant words (nouns and modifiers) were used to form 500 sentences of seven to nine syllables. Drawing on the Neighborhood Activation Model of spoken word recognition, each sentence contained three key words, each characterized by high or low frequency of use and high or low lexical confusability. Frequency of use was determined from published word-frequency indices, and lexical confusability was defined by a metric counting the number of other words that could be formed from a given word by a single phoneme substitution. Thirty-two subjects with normal hearing were randomly assigned to one of seven presentation levels in quiet, and an additional 32 listeners were randomly assigned to a fixed-level noise background at one of six signal-to-noise ratios. In both quiet and noise, high-use words were more intelligible than low-use words, phonetically unique words showed an intelligibility advantage, and the position of the key word within the sentence was also a significant factor. These data formed the basis for a sequence of experiments that isolated significant nonacoustic sources of variation in spoken word recognition.
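The confusability metric described above can be sketched as a simple neighbor count over a phonemically transcribed lexicon: a word's neighborhood consists of the other words of equal length that differ from it by exactly one phoneme substitution. The toy lexicon and ARPABET-style transcriptions below are illustrative assumptions, not the project's actual word lists or algorithm.

```python
from typing import List, Tuple

Phonemes = Tuple[str, ...]  # a word as a sequence of phoneme symbols

def neighborhood_density(word: Phonemes, lexicon: List[Phonemes]) -> int:
    """Count lexicon entries differing from `word` by exactly one
    phoneme substitution (same length, Hamming distance of 1)."""
    count = 0
    for other in lexicon:
        if other == word or len(other) != len(word):
            continue  # substitutions only: lengths must match
        diffs = sum(a != b for a, b in zip(word, other))
        if diffs == 1:
            count += 1
    return count

# Illustrative CVC lexicon: "cat", "bat", "cot", "cap", "dog"
lexicon = [
    ("K", "AE", "T"),
    ("B", "AE", "T"),
    ("K", "AA", "T"),
    ("K", "AE", "P"),
    ("D", "AO", "G"),
]

# "cat" has three single-substitution neighbors here: bat, cot, cap
print(neighborhood_density(("K", "AE", "T"), lexicon))  # → 3
```

Under this sketch, a "phonetically unique" key word is one with a low neighbor count, while a highly confusable word sits in a dense neighborhood of substitution rivals.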
