Emotion-Preserving Blendshape Update With Real-Time Face Tracking

Blendshape representations are widely used in facial animation. To build the blendshapes of a character, consistent semantics must be maintained across all of them. However, this is difficult for real characters because face shapes with the same semantics vary significantly across identities. Previous studies have handled this issue by asking users to perform a set of predefined expressions with specified semantics. We observe that facial emotions can instead be used to define semantics. Herein, we propose a real-time technique that directly updates blendshapes without predefined expressions, preserving semantics based on emotion information extracted from an arbitrary facial motion sequence. In addition, we design corresponding algorithms to efficiently update both the large- and middle-scale face shapes and the fine-scale facial details, such as wrinkles, within a real-time face tracking system. Experimental results indicate that, using a commodity RGBD sensor, we achieve real-time online blendshape updates with well-preserved semantics and user-specific facial features and details.
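
For context, the abstract presumes the standard linear (delta) blendshape model, in which a tracked face is a neutral mesh plus a weighted sum of per-expression offsets; it is the offset shapes that an update scheme like the one described here refines. The sketch below is an illustrative NumPy rendering of that representation under this assumption, not the authors' code; the function name `blend` and the toy data are hypothetical.

```python
import numpy as np

def blend(neutral, deltas, weights):
    """Evaluate a delta blendshape model.

    neutral: (V, 3) neutral face mesh (b_0).
    deltas:  (K, V, 3) expression offsets (b_i - b_0), one per blendshape.
    weights: (K,) expression coefficients, typically from a face tracker.
    Returns the posed (V, 3) face mesh b_0 + sum_i w_i * (b_i - b_0).
    """
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy usage: 4 vertices, 2 blendshapes.
rng = np.random.default_rng(0)
b0 = rng.standard_normal((4, 3))   # neutral mesh
d = rng.standard_normal((2, 4, 3)) # two expression offsets
w = np.array([0.3, 0.7])           # tracked weights
face = blend(b0, d, w)             # (4, 3) posed face mesh
```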