On Automatically Assessing Children's Facial Expressions Quality: A Study, Database, and Protocol

While a number of serious games aim to help children with ASD produce facial expressions, most of them fail to provide the precise feedback children need to learn adequately. In the scope of the JEMImE project, which aims at developing such a serious game platform, we introduce in this paper a machine learning approach for discriminating between facial expressions and assessing the quality of the emotional display. In particular, we point out the limited generalization capacity of models trained on adult subjects. To circumvent this issue in the design of our system, we gather a large database of children's facial expressions to train and validate the models. We describe our protocol for eliciting facial expressions and obtaining quality annotations, and empirically show that our models achieve high accuracy in both the classification and the quality assessment of children's facial expressions. Furthermore, we provide some insight into what the models learn and which features are most useful for discriminating between the various facial expression classes and quality levels. This new model, trained on the dedicated dataset, has been integrated into a proof of concept of the serious game.

Keywords: Facial Expression Recognition, Expression Quality, Random Forests, Emotion, Children, Dataset
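The approach outlined above (random-forest models that both classify an expression and score its quality, per the keywords) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the synthetic features stand in for whatever facial descriptors (e.g., geometric landmark features) the system actually extracts, and the label/score dimensions are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Synthetic stand-in for extracted facial features (e.g., landmark geometry).
rng = np.random.default_rng(0)
n_samples, n_features = 200, 20
X = rng.normal(size=(n_samples, n_features))
y_class = rng.integers(0, 4, size=n_samples)    # expression label (e.g., happy, angry, ...)
y_quality = rng.uniform(0.0, 1.0, size=n_samples)  # annotated quality score in [0, 1]

# One forest discriminates between expression classes...
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_class)
# ...and a second one regresses the quality of the emotional display.
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_quality)

# Forest feature importances give the kind of insight mentioned in the
# abstract: which features best discriminate between classes.
importances = clf.feature_importances_
```

Feature importances are a natural by-product of random forests, which fits the abstract's stated goal of inspecting which features matter most for each expression class.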
