Continuous body emotion recognition system during theater performances

Understanding emotional human behavior in its multimodal and continuous aspects is necessary for studying human-machine interaction and for creating consistent social agents. As a first step, we propose a system that recognizes the continuous emotional behavior people express during communication, based on their gestures and whole-body dynamic motion. The features used to classify the motion are inspired by the Laban Movement Analysis entities and are mapped onto the well-known Russell Circumplex Model. We choose a specific case study that corresponds to an ideal case of multimodal behavior emphasizing bodily expression: theater performance. Using a trained neural network and annotated data, our system describes motion behavior over time as trajectories on the Russell Circumplex Model diagram during theater performances. This work contributes to the understanding of human behavior and expression, and it is a first step toward a complete continuous emotion recognition system whose next stage will add facial expressions. Copyright © 2016 John Wiley & Sons, Ltd.
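The pipeline described above (motion features in, a point on the valence-arousal plane out, repeated over time to form a trajectory) can be sketched as follows. This is a minimal illustration, not the paper's actual model: the four "effort"-style feature names, the network size, and the random weights are all assumptions chosen only to show the shape of the mapping.

```python
import numpy as np

# Hypothetical LMA-inspired features per frame window, e.g.
# [weight_effort, time_effort, space_effort, flow_effort].
# These names and dimensions are illustrative, not from the paper.
rng = np.random.default_rng(0)

def init_mlp(n_in=4, n_hidden=8, n_out=2):
    """One hidden layer mapping motion features -> (valence, arousal)."""
    return {
        "W1": rng.normal(scale=0.5, size=(n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.5, size=(n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(params, x):
    h = np.tanh(x @ params["W1"] + params["b1"])
    # tanh keeps outputs in [-1, 1], matching the circumplex axes,
    # where the two outputs are read as (valence, arousal)
    return np.tanh(h @ params["W2"] + params["b2"])

# A performance is a sequence of feature vectors; the output is a
# trajectory of (valence, arousal) points on the circumplex diagram.
params = init_mlp()
frames = rng.uniform(-1.0, 1.0, size=(10, 4))  # 10 toy frame windows
trajectory = forward(params, frames)           # shape (10, 2)
print(trajectory.shape)  # (10, 2)
```

In the actual system the weights would of course be learned from the annotated theater recordings rather than drawn at random; the point here is only that each frame window yields one point in the valence-arousal plane, and concatenating them over time gives the emotion trajectory.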
