Control of Speech-Related Facial Movements of an Avatar from Video

Several puppetry techniques have recently been proposed to transfer emotional facial expressions from a user's video stream to an avatar. One approach defines correspondence functions between facial landmarks extracted by a tracker and the MPEG-4 Facial Animation Parameters (FAPs) that drive the 3D avatar's facial expressions [1]. More recently, Saragih and colleagues [2] proposed a real-time puppetry method that requires only a single image of the avatar and of the user.
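As a rough illustration of the landmark-to-FAP idea, a minimal Python sketch is given below. It assumes a simple linear correspondence function from landmark displacements (normalized by a FAPU-like scale factor) to FAP amplitudes; the landmark indices, FAP ids, gains, and normalization are illustrative assumptions, not the correspondence functions used in [1].

```python
import numpy as np

def landmarks_to_faps(landmarks, neutral_landmarks, fapu, correspondences):
    """Map 2D landmark displacements (pixels) to FAP amplitudes (FAPU units).

    Hypothetical linear correspondence, for illustration only.
    landmarks, neutral_landmarks: (N, 2) arrays of tracked points.
    fapu: scale factor converting pixels to FAPU units.
    correspondences: list of (landmark_idx, axis, fap_id, gain) tuples.
    """
    displacement = landmarks - neutral_landmarks
    faps = {}
    for lm_idx, axis, fap_id, gain in correspondences:
        # Each FAP is driven by one landmark coordinate, scaled by a gain
        # and normalized so the value is expressed in FAPU units.
        faps[fap_id] = gain * displacement[lm_idx, axis] / fapu
    return faps

# Example: drive two (illustrative) lower-face FAP ids from the vertical
# motion of a single chin landmark.
correspondences = [
    (8, 1, 3, 1.0),   # landmark 8, y-axis -> FAP id 3, gain 1.0 (ids illustrative)
    (8, 1, 5, 0.5),   # landmark 8, y-axis -> FAP id 5, gain 0.5
]
neutral = np.zeros((68, 2))
current = neutral.copy()
current[8, 1] = 12.0  # chin landmark moved 12 px downward
print(landmarks_to_faps(current, neutral, fapu=50.0, correspondences=correspondences))
```

In practice such mappings are calibrated per user (e.g., from a neutral frame and a few extreme poses) rather than hand-tuned as in this toy example.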