SYNFACE - a talking face telephone

The primary goal of the SYNFACE project is to make it easier for hearing-impaired people to use an ordinary telephone. This is achieved by connecting a talking face to the telephone: the incoming speech signal governs the speech movements of the talking face, so the face provides lip-reading support for the user. The project will define the visual speech information that supports lip-reading and develop techniques to derive this information from the acoustic speech signal in near real time for three languages: Dutch, English and Swedish. This requires automatic speech recognition methods that detect information in the acoustic signal that correlates with the speech movements. This information governs the speech movements of a synthetic face and synchronises them with the acoustic speech signal. A prototype system incorporating the results achieved so far in SYNFACE is being constructed; it will be tested and evaluated in all three languages by hearing-impaired users. SYNFACE is an IST project (IST-2001-33327) with partners from the Netherlands, the UK and Sweden, and builds on experience gained in the Swedish Teleface project.
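As a rough illustration of the pipeline described above (acoustic frames classified into phone-like units, mapped to visemes that drive the synthetic face, with the audio delayed slightly so face and sound stay synchronised), the following Python sketch uses hypothetical names throughout (PhoneticRecognizer, FaceModel, PHONE_TO_VISEME, play); it is a minimal conceptual sketch, not the project's actual implementation or API.

```python
# Hypothetical sketch of a SYNFACE-style low-latency pipeline.
# All class/function names and the frame/latency values are illustrative
# assumptions, not taken from the project.

from collections import deque

FRAME_MS = 10          # assumed analysis frame length
LATENCY_FRAMES = 20    # assumed delay budget (~200 ms) to keep face and audio in sync

# Simplified, illustrative phone-to-viseme mapping.
PHONE_TO_VISEME = {"p": "bilabial_closed", "a": "open", "f": "labiodental", "sil": "rest"}

class PhoneticRecognizer:
    """Stand-in for a low-latency acoustic phone recogniser."""
    def classify(self, frame):
        # A real system would run a phone classifier on the frame here.
        return "sil"

class FaceModel:
    """Stand-in for the parametric talking-face renderer."""
    def set_viseme(self, viseme):
        print("face ->", viseme)

def play(frame):
    """Placeholder for sending the delayed audio frame to the handset."""
    pass

def run(audio_frames):
    recognizer = PhoneticRecognizer()
    face = FaceModel()
    audio_buffer = deque()  # delay line so the face leads/matches the played audio
    for frame in audio_frames:
        audio_buffer.append(frame)
        phone = recognizer.classify(frame)
        face.set_viseme(PHONE_TO_VISEME.get(phone, "rest"))
        if len(audio_buffer) > LATENCY_FRAMES:
            play(audio_buffer.popleft())
```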
