An emotional viseme compiler for facial animation

The animation of a three-dimensional synthetic human face has been the object of much research in recent years. Many systems now exist for this purpose, most of which rely on the artistic and animation skills of animators. Methods for generating lip movements to accompany a speech soundtrack have also been developed. These systems extract phonemes from the speech signal and convert them to "visemes", the visual lip shapes of a synthetic human face. The generation of human emotional expressions has likewise received attention in the recent past. This paper combines some of these developments to present a system that automatically combines emotional cues with phonemes to generate emotional visual speech on a synthetic human face.
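
As a rough illustration of the general idea (not the paper's actual method), a viseme compiler of this kind might blend a phoneme-driven viseme pose with an emotion-driven expression pose before driving the face model. The parameter names, viseme labels, and blend weights in the sketch below are hypothetical.

```python
# Minimal sketch, assuming hypothetical facial control parameters:
# blend a phoneme-driven viseme pose with an emotion-driven expression pose.

from dataclasses import dataclass

# Each pose is a small set of facial control parameters (assumed for illustration).
VISEME_POSES = {
    "AA": {"jaw_open": 0.7, "lip_round": 0.1, "lip_stretch": 0.2},
    "OO": {"jaw_open": 0.3, "lip_round": 0.9, "lip_stretch": 0.0},
    "MM": {"jaw_open": 0.0, "lip_round": 0.2, "lip_stretch": 0.0},
}

EMOTION_POSES = {
    "neutral": {"brow_raise": 0.0, "mouth_corner_up": 0.0, "lip_stretch": 0.0},
    "happy":   {"brow_raise": 0.2, "mouth_corner_up": 0.8, "lip_stretch": 0.3},
    "sad":     {"brow_raise": 0.4, "mouth_corner_up": -0.5, "lip_stretch": 0.0},
}

@dataclass
class Frame:
    time: float   # seconds from start of the soundtrack
    params: dict  # blended facial control parameters


def blend(viseme: str, emotion: str, emotion_weight: float = 0.5) -> dict:
    """Combine a viseme pose with an emotion pose.

    Parameters defined only by the viseme keep their viseme values, parameters
    defined only by the emotion are copied in, and shared parameters are mixed
    according to emotion_weight.
    """
    out = dict(VISEME_POSES[viseme])
    for key, value in EMOTION_POSES[emotion].items():
        if key in out:
            out[key] = (1.0 - emotion_weight) * out[key] + emotion_weight * value
        else:
            out[key] = value
    return out


def compile_track(phoneme_track, emotion, emotion_weight=0.5):
    """Turn a timed viseme sequence [(start_time, viseme), ...] into keyframes."""
    return [Frame(time=start, params=blend(viseme, emotion, emotion_weight))
            for start, viseme in phoneme_track]


if __name__ == "__main__":
    track = [(0.00, "MM"), (0.12, "AA"), (0.30, "OO")]
    for frame in compile_track(track, emotion="happy"):
        print(f"{frame.time:5.2f}s  {frame.params}")
```

The keyframes produced this way would then be interpolated and applied to the face model; how speech-critical lip parameters are protected from emotional overrides is a design choice the paper itself addresses.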