Seeing tongue movements from outside

[1] Takaaki Kuratate, et al. Audio-visual synthesis of talking faces from speech production correlates, 1999.

[2] R. Orin Cornett, et al. The Cued Speech Resource Book for Parents of Deaf Children, 1992.

[3] Pierre Badin, et al. Determining tongue articulation: from discrete fleshpoints to continuous shadow, 1997, EUROSPEECH.

[4] Hani Yehia, et al. Quantitative association of vocal-tract and facial behavior, 1998, Speech Communication.

[5] Lionel Revéret, et al. A new 3D lip model for analysis and synthesis of lip motion in speech production, 1998, AVSP.

[6] N. P. Erber, et al. Auditory-visual perception of speech with reduced optical clarity, 1979, Journal of Speech and Hearing Research.

[7] H. McGurk, et al. Hearing lips and seeing voices, 1976, Nature.

[8] Gérard Bailly, et al. A three-dimensional linear articulatory model based on MRI data, 1998, ICSLP.

[9] Abeer Alwan, et al. On the correlation between facial movements, tongue movements and speech acoustics, 2000, INTERSPEECH.

[10] Gérard Bailly, et al. Towards an audiovisual virtual talking head: 3D articulatory modeling of tongue, lips and face based on MRI and video images, 1998.

[11] C. Stoel-Gammon, et al. Prelinguistic vocalizations of hearing-impaired and normally hearing subjects: a comparison of consonantal inventories, 1988, Journal of Speech and Hearing Disorders.

[12] J. Robert-Ribes, et al. Complementarity and synergy in bimodal speech: auditory, visual, and audio-visual identification of French oral vowels in noise, 1998, Journal of the Acoustical Society of America.

[13] C. M. Reed, et al. Analytic study of the Tadoma method: improving performance through the use of supplementary tactual displays, 1992, Journal of Speech and Hearing Research.

[14] W. H. Sumby, et al. Visual contribution to speech intelligibility in noise, 1954.

[15] Gérard Bailly, et al. Three-dimensional linear articulatory modeling of tongue, lips and face, based on MRI and video images, 2002, Journal of Phonetics.