Speech-to-speech translation has been studied to realize natural human communication across language barriers. For further multi-modal natural communication, visual information such as face and lip movements will also be necessary. We introduce a multi-modal English-to-Japanese and Japanese-to-English translation system that translates the speaker's speech motion as well, synchronizing it to the translated speech. To preserve the speaker's facial expression, we replace only the image of the speech organs with a synthesized one, generated by a three-dimensional wire-frame model that is adaptable to any speaker. This approach enables image synthesis and translation with an extremely small database. We conducted subjective evaluation tests, namely a connected-digit discrimination test on data with and without audio-visual lip-synchronization. The results confirm the high quality of the proposed audio-visual translation system and the importance of lip-synchronization.
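As context for the lip-synchronization step, the following is a minimal sketch (in Python, not from the paper) of how phoneme timing from the translated, synthesized speech could drive per-frame lip parameters for a mouth model. The names TimedPhoneme and LIP_SHAPES, the frame rate, and the parameter values are illustrative assumptions; the paper's actual system drives a speaker-adapted three-dimensional wire-frame model and composites only the mouth region onto the original face image.

```python
# Hypothetical sketch: time-align lip parameters to translated speech.
# Nothing here is the paper's API; names and values are assumptions.

from dataclasses import dataclass

FPS = 30  # assumed video frame rate


@dataclass
class TimedPhoneme:
    phoneme: str   # phoneme of the translated (target-language) speech
    start_ms: int
    end_ms: int


# Hypothetical phoneme -> (mouth opening, lip rounding) table.
# A real system would derive such values from the parameters of the
# speaker-adapted 3D wire-frame lip model.
LIP_SHAPES = {
    "a": (0.9, 0.2),
    "i": (0.3, 0.1),
    "u": (0.3, 0.8),
    "sil": (0.0, 0.0),
}


def lip_track(phonemes: list[TimedPhoneme],
              duration_ms: int) -> list[tuple[float, float]]:
    """Return one lip-parameter pair per video frame, time-aligned to the
    synthesized translated speech, so the rendered mouth stays lip-synced."""
    frames = []
    n_frames = duration_ms * FPS // 1000
    for k in range(n_frames):
        t = k * 1000 / FPS  # frame timestamp in milliseconds
        current = "sil"
        for p in phonemes:
            if p.start_ms <= t < p.end_ms:
                current = p.phoneme
                break
        frames.append(LIP_SHAPES.get(current, LIP_SHAPES["sil"]))
    return frames


if __name__ == "__main__":
    # Translated utterance "a-i-u" with phoneme timing from the synthesizer.
    phonemes = [TimedPhoneme("a", 0, 200),
                TimedPhoneme("i", 200, 400),
                TimedPhoneme("u", 400, 600)]
    for params in lip_track(phonemes, 600)[:5]:
        print(params)  # per-frame parameters that would drive the 3D model
```

Each output pair would select or deform a mouth image for one video frame, which is then composited back onto the original face so that only the speech-organ region changes.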