CoArt: coarticulation region analysis for control of 2D characters

A facial analysis-synthesis framework based on a concise set of local, independently actuated coarticulation regions (CRs) is presented for the control of 2D animated characters. CRs are parameterized by muscle actuations and thereby provide a physically meaningful description of face state that is easily abstracted to higher-level descriptions of facial expression. Independent component analysis (ICA), applied to a set of training images acquired from an actor, is used to characterize the appearance space of each CR. Within this framework, actor-independent face reconstruction databases can be created by an artist or extracted from video sequences. In addition, the muscle parameter values may be used to drive any similarly parameterized 3D facial model. The flexibility of this methodology is demonstrated with applications to 2D facial animation control and sample-based video synthesis. The analysis runs in real time on modest consumer hardware.
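
As a rough illustration of the analysis side of this pipeline, the sketch below fits one ICA appearance model per coarticulation region from training frames and projects new frames onto each region's basis to obtain per-region coefficients. This is a minimal sketch assuming scikit-learn's FastICA; the region boxes, component counts, and all function names are hypothetical and not from the paper, and the paper's mapping from appearance coefficients to muscle actuations is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical crop boxes (row, col, height, width) for each coarticulation
# region (CR) in a stabilized grayscale face image; coordinates illustrative.
CR_BOXES = {
    "left_brow":  (40, 30, 32, 48),
    "right_brow": (40, 112, 32, 48),
    "mouth":      (140, 60, 48, 80),
}

def crop(frame, box):
    """Flatten one CR patch of a frame into a feature vector."""
    r, c, h, w = box
    return frame[r:r + h, c:c + w].astype(np.float64).reshape(-1)

def fit_cr_models(training_frames, n_components=8):
    """Fit an independent ICA appearance model for every CR."""
    models = {}
    for name, box in CR_BOXES.items():
        X = np.stack([crop(f, box) for f in training_frames])
        models[name] = FastICA(n_components=n_components,
                               whiten="unit-variance",
                               max_iter=1000,
                               random_state=0).fit(X)
    return models

def analyze_frame(frame, models):
    """Project each CR patch onto its ICA basis -> per-region coefficients."""
    return {name: models[name].transform(crop(frame, CR_BOXES[name])[None, :])[0]
            for name in models}

def synthesize_region(name, coeffs, models):
    """Reconstruct a CR patch from coefficients (the synthesis half)."""
    _, _, h, w = CR_BOXES[name]
    return models[name].inverse_transform(coeffs[None, :])[0].reshape(h, w)
```

Because each region is modeled independently, regions can be analyzed and resynthesized separately; in the paper the recovered coefficients would additionally be related to muscle actuation values, which this sketch does not attempt.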
