Dynamic Orometrics: A Computer-Based Means of Learning About and Developing Speech by Deaf Children

Deaf children build their speech concepts primarily on what they see. Some structures, such as the lips, jaws, and front of the tongue, are readily visible. In the speech of the deaf, the actions of these visible structures resemble those of children with normal hearing more closely than do the actions of the less visible or nonvisible structures. The hearing child, however, can listen to the acoustic pattern of what he or she says, compare it with what other speakers say, and then adjust his or her speaking activities to bring the sounds closer and closer to those of other talkers. By this means the child builds an accurate acoustics-to-articulation action schema and masters articulator motor control. The deaf child has only half of the equation: he or she can feel his or her own movements and can see some of the actions as others talk, but the sensory link between the two sets of information is missing. Unseen motor behaviors of speech are inaccessible, so the child has little opportunity to master the full range of speech actions. Careful documentation is lacking of what deaf children actually do in their efforts to utter specific sounds and of their ability to produce nonspeech movements similar to those in speech. The goal of this research has been to provide a means to study deaf speech and to add the other half of the sensory equation: a visual-vocal linkage that parallels the auditory-vocal channel of hearing children.
To meet this goal, a computer-based dynamic orometer was developed which (a) documents simultaneous lip and jaw positioning, tongue-palate contacts, tongue shape and positioning within the mouth, and changes in voice frequency and intensity as children talk; (b) studies motor control of the lips, jaws, tongue, and larynx in nonspeech activities that span the ranges of movement found in speech; and (c) provides side-by-side video displays showing the actual movements of the articulators as a hearing and a deaf child speak. This provides an efficient and effective dual visual-vocal means for modeling and shaping the speech of deaf children. In this symposium, videotapes will be presented that show the system, demonstrate its use, and portray some of the exciting results attained.
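The side-by-side display concept above can be illustrated with a minimal sketch. The data-structure names (`OrometricFrame`), the grid representation of tongue-palate contacts, and the matching metric below are illustrative assumptions, not the actual orometer's design; they show only how a model speaker's contact pattern and a learner's pattern might be compared and rendered together for visual feedback.

```python
# Hypothetical sketch: comparing and displaying tongue-palate contact
# patterns side by side. All names and representations are assumptions
# for illustration, not the actual dynamic-orometer implementation.
from dataclasses import dataclass
from typing import List


@dataclass
class OrometricFrame:
    """One sampled moment of articulatory data (illustrative fields)."""
    contacts: List[List[bool]]  # tongue-palate contact grid, True = contact
    f0_hz: float                # voice fundamental frequency
    intensity_db: float         # voice intensity


def contact_overlap(model: OrometricFrame, learner: OrometricFrame) -> float:
    """Fraction of grid positions where the learner's contact state
    matches the model speaker's (1.0 = identical patterns)."""
    matches = 0
    total = 0
    for row_m, row_l in zip(model.contacts, learner.contacts):
        for m, l in zip(row_m, row_l):
            matches += int(m == l)
            total += 1
    return matches / total


def render_side_by_side(model: OrometricFrame, learner: OrometricFrame) -> str:
    """Text rendering of the two contact grids, model on the left,
    learner on the right ('#' = contact, '.' = no contact)."""
    lines = []
    for row_m, row_l in zip(model.contacts, learner.contacts):
        left = "".join("#" if c else "." for c in row_m)
        right = "".join("#" if c else "." for c in row_l)
        lines.append(f"{left}   {right}")
    return "\n".join(lines)
```

A real system would refresh such a display in real time from sensor input; this sketch only fixes the comparison logic that any such feedback loop would need.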
