Towards multimodal affective expression: merging facial expressions and body motion into emotion

Affect recognition plays an important role in everyday human life, and expressions are a substantial channel of affective communication. Humans rely on several channels of information to understand the affective messages communicated by others. Similarly, an automatic affect recognition system is expected to analyse different types of emotion expression. In this respect, an important issue to address is the fusion of different channels of expression, taking into account the relationships and correlations across modalities. In this work, affective facial expressions and bodily motion are addressed as channels for the communication of affect and are combined into an emotion recognition system. A probabilistic approach fuses features from the two modalities, incorporating geometric facial-expression features and skeleton-based body-motion features. Preliminary results show that the presented approach has potential for automatic emotion recognition and can be used for human-robot interaction.
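The abstract does not spell out the fusion rule. As one concrete illustration of probabilistic late fusion, the sketch below combines per-modality class posteriors with a weighted sum and renormalisation; the emotion labels, weights, and example probabilities are all assumptions made for illustration, not the paper's actual method.

```python
import numpy as np

# Hypothetical late-fusion sketch for two modality classifiers.
# Labels, weights, and probabilities below are illustrative assumptions,
# not values taken from the paper.
EMOTIONS = ["anger", "happiness", "sadness", "surprise", "neutral"]

def fuse_posteriors(p_face, p_body, w_face=0.6, w_body=0.4):
    """Weighted-sum fusion of two per-modality posterior vectors."""
    p_face = np.asarray(p_face, dtype=float)
    p_body = np.asarray(p_body, dtype=float)
    fused = w_face * p_face + w_body * p_body
    return fused / fused.sum()  # renormalise to a valid distribution

# Example: the face classifier leans towards "happiness",
# while the body-motion classifier leans towards "surprise".
p_face = [0.05, 0.60, 0.05, 0.25, 0.05]
p_body = [0.10, 0.30, 0.05, 0.45, 0.10]
fused = fuse_posteriors(p_face, p_body)
print(EMOTIONS[int(np.argmax(fused))])  # -> "happiness" with these weights
```

With a weighted sum, a confident prediction in one modality can outweigh a weaker, conflicting one in the other; a product rule would instead penalise any class that either modality scores low.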
