Facial animation system for embedded applications

This paper describes a prototype implementation of a speech-driven facial animation system for embedded devices. The system comprises two components: speech recognition and talking-head synthesis. A context-based visubsyllable database maps Chinese initials and finals to their corresponding mouth shapes during pronunciation. With this database, 3D facial animation can be synthesized directly from a speech signal input. Experimental results show that the system simulates real mouth shapes well and provides a friendly interface for communication terminals.
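As a rough illustration of the lookup step the abstract describes, the following Python sketch shows how such a database might be organized: a table maps pinyin initials and finals to mouth-shape parameters, and a recognized syllable is split into its subsyllables and translated into animation keyframes. All names and parameter values (`MouthShape`, `VISEME_TABLE`, the numeric weights) are illustrative assumptions, not the paper's actual data structures or database contents.

```python
# Minimal sketch of a subsyllable-to-mouth-shape lookup, assuming a
# viseme-style parameterization. Values are placeholders, not taken
# from the paper's database.

from dataclasses import dataclass

@dataclass
class MouthShape:
    """Simplified mouth-shape parameters for one keyframe."""
    openness: float   # vertical lip opening, 0.0 (closed) to 1.0 (wide)
    width: float      # horizontal lip stretch, 0.0 to 1.0
    rounding: float   # lip protrusion/rounding, 0.0 to 1.0

# Hypothetical database: pinyin initials and finals -> mouth shapes.
VISEME_TABLE = {
    "b":  MouthShape(openness=0.0, width=0.4, rounding=0.1),  # bilabial closure
    "sh": MouthShape(openness=0.2, width=0.3, rounding=0.6),
    "a":  MouthShape(openness=0.9, width=0.6, rounding=0.1),  # open vowel
    "u":  MouthShape(openness=0.3, width=0.2, rounding=0.9),  # rounded vowel
    "i":  MouthShape(openness=0.2, width=0.9, rounding=0.0),  # spread vowel
}

NEUTRAL = MouthShape(openness=0.1, width=0.5, rounding=0.2)

def syllable_to_keyframes(initial: str, final: str) -> list[MouthShape]:
    """Map one recognized Chinese syllable (initial + final) to a
    sequence of mouth-shape keyframes, falling back to a neutral
    pose for subsyllables missing from the table."""
    frames = []
    for sub in (initial, final):
        if sub:  # some syllables have no initial
            frames.append(VISEME_TABLE.get(sub, NEUTRAL))
    return frames

# Example: the syllable "sha" (initial "sh" + final "a").
if __name__ == "__main__":
    for frame in syllable_to_keyframes("sh", "a"):
        print(frame)
```

In a real context-based system, the lookup key would presumably include neighboring subsyllables so that coarticulation effects select different shapes for the same initial or final; the flat table above omits that context dimension for brevity.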