Generate Individually Optimized Blendshapes

Blendshape-based animation is a technique commonly used to animate the face. Generating realistic animation for different faces requires creating appropriate blendshapes for each individual face. There have been many attempts to produce production-level blendshapes, but most existing methods require a professional artist's intuition and manual intervention. In this paper, we present a novel approach to automatically generate individually optimized blendshapes from facial expressions captured in real time. The proposed method generates blendshapes from the captured face with two methods, linear regression and an autoencoder, and selects the trained result that is more similar to the original face. The adopted blendshapes can be used to animate the original face more naturally. In addition, the generated blendshapes are used to retarget the original face's animation to another face while preserving the original face's animation characteristics. A comparison of results obtained by animating the face on screen shows that linear regression is suitable for retargeting facial expressions without complicated neural networks.
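The two-method pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it fits blendshape weights to a captured expression by ordinary least squares, stands in a placeholder for the autoencoder's reconstruction (a real autoencoder would be trained on the capture data), and then performs the selection step by keeping whichever reconstruction is closer to the captured face. All array shapes, the noise levels, and the variable names are illustrative assumptions.

```python
import numpy as np

# Hypothetical data: a neutral face and K blendshape deltas over V vertices (flattened to 3V).
rng = np.random.default_rng(0)
V, K = 100, 8
neutral = rng.normal(size=3 * V)
deltas = rng.normal(size=(3 * V, K))  # columns: per-blendshape displacement vectors

# A captured expression to explain (synthesized here from known weights plus noise).
true_w = rng.uniform(0.0, 1.0, size=K)
captured = neutral + deltas @ true_w + 0.01 * rng.normal(size=3 * V)

# Method 1, linear regression: solve min_w || neutral + deltas @ w - captured ||^2.
w, *_ = np.linalg.lstsq(deltas, captured - neutral, rcond=None)
recon_lr = neutral + deltas @ w

# Method 2, autoencoder: placeholder reconstruction standing in for a trained model's output.
recon_ae = captured + 0.05 * rng.normal(size=3 * V)

# Selection step: adopt whichever result is more similar to the original captured face.
err_lr = np.linalg.norm(recon_lr - captured)
err_ae = np.linalg.norm(recon_ae - captured)
best = "linear regression" if err_lr <= err_ae else "autoencoder"
print(f"selected: {best} (LR error {err_lr:.3f}, AE error {err_ae:.3f})")
```

In this toy setup the least-squares fit recovers the expression almost exactly, which mirrors the abstract's observation that plain linear regression can suffice for retargeting without a complicated neural network.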
