OpenFACS: An Open Source FACS-Based 3D Face Animation System

We present OpenFACS, an open source FACS-based 3D face animation system. OpenFACS allows the simulation of realistic facial expressions through the manipulation of specific action units as defined in the Facial Action Coding System (FACS). OpenFACS ships with an API for generating real-time dynamic facial expressions on a three-dimensional character, and it can be embedded in existing systems without any prior experience in computer graphics. In this note, we discuss the adopted face model and the implemented architecture, and provide additional details on the model dynamics. Finally, a validation experiment is presented to assess the effectiveness of the model.
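
To make the embedding claim concrete, the following is a minimal sketch of what driving such a system from client code might look like, assuming a JSON-over-socket protocol in which action unit (AU) intensities are sent to a running renderer. The endpoint (127.0.0.1:5000), the message format, and the function name send_action_units are illustrative assumptions, not the documented OpenFACS API.

```python
# Hypothetical sketch: driving a FACS-based 3D character over a local socket.
# The host/port, JSON message format, and AU key naming are assumptions made
# for illustration; consult the OpenFACS documentation for the actual interface.
import json
import socket

def send_action_units(aus, host="127.0.0.1", port=5000):
    """Send a dict of action-unit intensities (0.0-1.0) to the renderer."""
    message = json.dumps(aus).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (host, port))

# A Duchenne smile in FACS terms: AU6 (cheek raiser) + AU12 (lip corner puller).
send_action_units({"AU6": 0.8, "AU12": 1.0})
```

Because the animation engine holds the face model and its dynamics, a client only needs to emit target AU intensities over time; no computer graphics expertise is required on the caller's side.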
