Boxing the face: A comparison of dynamic facial databases used in facial analysis and animation

Facial animation is a difficult task that relies on approximating subtle facial movements [Trutoiu et al. 2014], and it needs to be well grounded in real-life dynamic facial behaviour to be convincing. Yet while the endpoints of expressions in still images can be defined relatively precisely using the Facial Action Coding System (FACS) [Ekman et al. 2002], the design of facial dynamics requires additional high-resolution data (e.g., [Trutoiu et al. 2014]). This is particularly the case for the creation of more naturalistic expressions, i.e., everyday patterns of “partial” and “mixed” movements that depart from simplistic assumptions of omnipresent, stereotypic expressions of “basic” patterns of Action Units (AUs). However, for animation designers who lack the resources to elicit, record, and validate such expressions, the question arises as to which of the extant, freely available dynamic facial databases might best serve this purpose. One of the most important steps in this decision process is the selection of technically adequate databases that offer sufficient resolution of the face and expressions to allow adequate modelling.
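As a rough illustration of how the facial resolution of a candidate database could be screened, the sketch below measures the size of the detected face bounding box in a sample frame. It is not the paper's own procedure: it assumes OpenCV is available and uses the bundled Haar-cascade face detector in the spirit of the boosted-cascade approach of [2]; the file name `example_database_clip.mp4` and the detector parameters are illustrative placeholders.

```python
# Minimal sketch (assumptions noted above): estimate the pixel resolution of the
# face region in one frame of a database video via Viola-Jones-style detection.
import cv2

def face_box_resolution(frame_bgr):
    """Return (width, height) in pixels of the largest detected face box, or None."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Take the largest box; its side lengths approximate the usable facial resolution.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return int(w), int(h)

# Example usage: sample the first frame of a clip (hypothetical file name).
cap = cv2.VideoCapture("example_database_clip.mp4")
ok, frame = cap.read()
cap.release()
if ok:
    print(face_box_resolution(frame))
```

In practice one would average such measurements over many frames and clips, since head movement and camera framing make the face-box size vary within a recording.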

[1] Laura C. Trutoiu et al. Spatial and Temporal Linearities in Posed and Spontaneous Smiles. ACM Trans. Appl. Percept., 2014.

[2] Paul A. Viola and Michael J. Jones. Rapid object detection using a boosted cascade of simple features. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), 2001.