A natural human-computer interface requires the integration of realistic audio and visual information for perception and display. An example of such an interface is an animated talking head displayed on the computer screen as a human-like computer agent. This system converts text to acoustic speech with synchronized animation of mouth movements. The talking head is based on a generic 3D human head model, but natural-looking personalized models are needed to improve realism. In this paper we report results on adapting a generic head model to 3D range data of a human head obtained from a 3D laser range scanner. The personalized model is incorporated into the talking head system. With texture mapping, the personalized model offers a more natural and realistic look than the generic model.
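The abstract only summarizes the adaptation step, so as a rough illustration of one common approach to this problem (not necessarily the method used in the paper), a generic model's vertices can be displaced radially so they lie on the scanned surface, using the laser scan as a cylindrical range image. The function name, the regular-grid layout of the scan, and all parameters below are assumptions for the sketch.

```python
import numpy as np

def adapt_generic_model(vertices, range_image, h_min, h_max):
    """Displace generic-model vertices onto a scanned surface.

    Assumes the scan is a cylindrical range image of shape
    (n_theta, n_h), where range_image[i, j] is the measured radius
    at azimuth bin i and height bin j. Illustrative sketch only;
    the paper's exact adaptation procedure may differ.
    """
    n_theta, n_h = range_image.shape
    adapted = np.empty_like(vertices)
    for k, (x, y, z) in enumerate(vertices):
        theta = np.arctan2(y, x)  # azimuth of this vertex
        # Map azimuth [-pi, pi] and height [h_min, h_max] to scan indices.
        i = int((theta + np.pi) / (2 * np.pi) * (n_theta - 1))
        j = int(np.clip((z - h_min) / (h_max - h_min), 0.0, 1.0) * (n_h - 1))
        r = range_image[i, j]  # scanned radius at this (azimuth, height)
        # Keep the vertex's angular position but move it to the scanned radius.
        adapted[k] = (r * np.cos(theta), r * np.sin(theta), z)
    return adapted
```

In the same cylindrical parameterization, texture mapping reduces to assigning each adapted vertex the (i, j) coordinates computed above as texture coordinates into the scanner's color image, which is one plausible way the personalized model could receive its photographic appearance.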