An animated display of tongue, lip and jaw movements during speech: A proper basis for speech aids to the handicapped and other speech technologies

The authors have developed a method for inferring articulatory parameters from acoustics. To obtain training data, an X-ray microbeam records the movements of the lower lip, tongue tip and tongue dorsum during normal speech. A neural network is then trained to map concurrently recorded acoustic data onto the articulatory data. The resulting system has applications in speech therapy, as a lip-reading aid, and as a basis for other speech technologies, including speech and speaker recognition and low data-rate speech transmission.
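The abstract does not specify the network architecture, the acoustic features, or the training procedure. The following is a minimal sketch of the general idea, acoustic-to-articulatory regression with a small feedforward network, in which the feature dimensions (13 acoustic coefficients per frame, 6 articulator coordinates), the layer sizes, and the use of synthetic data are all assumptions made purely for illustration.

```python
import torch
from torch import nn

# Assumed dimensions: 13 acoustic coefficients per frame (e.g. cepstral-style
# features) mapped to 6 articulator coordinates (x/y for lower lip, tongue tip,
# tongue dorsum). These numbers are illustrative, not taken from the paper.
N_ACOUSTIC = 13
N_ARTICULATORY = 6

# Simple feedforward regressor from one acoustic frame to articulator positions.
model = nn.Sequential(
    nn.Linear(N_ACOUSTIC, 64),
    nn.Tanh(),
    nn.Linear(64, N_ARTICULATORY),
)

# Placeholder data standing in for concurrently recorded acoustic frames and
# X-ray microbeam pellet positions; a real system would load aligned recordings.
acoustic_frames = torch.randn(1000, N_ACOUSTIC)
articulator_positions = torch.randn(1000, N_ARTICULATORY)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train the network to predict articulator positions from acoustic frames.
for epoch in range(200):
    optimizer.zero_grad()
    predictions = model(acoustic_frames)
    loss = loss_fn(predictions, articulator_positions)
    loss.backward()
    optimizer.step()

# At run time, predicted articulator positions for incoming acoustic frames
# could drive an animated display of tongue, lip and jaw movement.
with torch.no_grad():
    estimated_positions = model(acoustic_frames[:1])
```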