Human motions and emotions recognition inspired by LMA qualities

The purpose of this paper is to describe human motions and emotions that appear in real video images with compact and informative representations. We aim to recognize expressive motions and to analyze the relationship between human body features and emotions. We propose a new descriptor vector for expressive human motions inspired by the Laban movement analysis (LMA) method, a descriptive language with an underlying semantics that allows human motion to be qualified in its different aspects. The proposed descriptor is fed into a machine learning framework comprising a random decision forest, a multi-layer perceptron, and two multiclass support vector machine methods. We evaluated our descriptor first for motion recognition and second for emotion recognition from the analysis of expressive body movements. Preliminary experiments with three public datasets, MSRC-12, MSR Action 3D, and UTKinect, showed that our model performs better than many existing motion recognition methods. We also built a dataset composed of 10 control motions (move, turn left, turn right, stop, sit down, wave, dance, introduce yourself, increase velocity, decrease velocity), on which our descriptor vector achieved high recognition performance. In the second experimental part, we evaluated our descriptor on a dataset of expressive gestures performed with four basic emotions selected from Russell's circumplex model of affect (happy, angry, sad, and calm). The same machine learning methods were used to recognize human emotions from expressive motions. A 3D virtual avatar was introduced to reproduce human body motions, and three aspects were analyzed: (1) how the expressed emotions are classified by humans, (2) how the motion descriptor is evaluated by humans, and (3) what the relationship is between human emotions and motion features.
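The classification stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the descriptor vectors here are synthetic random data, and the descriptor dimension, class count, and hyperparameters are assumptions chosen for the example. It only shows the overall shape of the pipeline, in which a fixed-length motion descriptor is fed to a random decision forest, a multi-layer perceptron, and a multiclass SVM.

```python
# Sketch of the descriptor-to-classifier pipeline, assuming scikit-learn.
# Synthetic stand-ins replace the real LMA-inspired descriptors; the
# dimension (87) and the 10-class label set mirror the 10 control motions
# only illustratively.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, descriptor_dim, n_classes = 400, 87, 10
X = rng.normal(size=(n_samples, descriptor_dim))   # stand-in descriptor vectors
y = rng.integers(0, n_classes, size=n_samples)     # stand-in motion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# One instance of each classifier family named in the abstract.
classifiers = {
    "random decision forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "multi-layer perceptron": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    "multiclass SVM (one-vs-one)": SVC(kernel="rbf", decision_function_shape="ovo"),
}
scores = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    scores[name] = clf.score(X_te, y_te)
    print(f"{name}: accuracy = {scores[name]:.2f}")
```

With random descriptors the accuracies hover near chance (0.1 for 10 classes); the point is only the interface, in which each classifier consumes the same descriptor matrix, so the framework can compare them directly.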

[1]  Qunsheng Peng,et al.  Online robust action recognition based on a hierarchical model , 2014, The Visual Computer.

[2]  Cordelia Schmid,et al.  Dense Trajectories and Motion Boundary Descriptors for Action Recognition , 2013, International Journal of Computer Vision.

[3]  R. Laban,et al.  The mastery of movement , 1950 .

[4]  George Hripcsak,et al.  Technical Brief: Agreement, the F-Measure, and Reliability in Information Retrieval , 2005, J. Am. Medical Informatics Assoc..

[5]  Ramón Díaz-Uriarte,et al.  Gene selection and classification of microarray data using random forest , 2006, BMC Bioinformatics.

[6]  B. Gelder,et al.  Why bodies? Twelve reasons for including bodily expressions in affective neuroscience , 2009, Philosophical Transactions of the Royal Society B: Biological Sciences.

[7]  Yale Song,et al.  Distribution-sensitive learning for imbalanced datasets , 2013, 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG).

[8]  David P. Dobkin,et al.  The quickhull algorithm for convex hulls , 1996, TOMS.

[9]  Jean-Yves Didier,et al.  Robust human action recognition system using Laban Movement Analysis , 2017, KES.

[10]  Sylvain Arlot,et al.  A survey of cross-validation procedures for model selection , 2009, 0907.4728.

[11]  Gérard G. Medioni,et al.  Structured Time Series Analysis for Human Action Segmentation and Recognition , 2014, IEEE Transactions on Pattern Analysis and Machine Intelligence.

[12]  Norman I. Badler,et al.  Semantic Segmentation of Motion Capture Using Laban Movement Analysis , 2007, IVA.

[13]  Jake K. Aggarwal,et al.  Spatio-temporal Depth Cuboid Similarity Feature for Activity Recognition Using Depth Camera , 2013, 2013 IEEE Conference on Computer Vision and Pattern Recognition.

[14]  Norman I. Badler,et al.  Perform: perceptual approach for adding OCEAN personality to human motion using laban movement analysis , 2016, TOGS.

[15]  Jovan Popovic,et al.  Style translation for human motion , 2005, ACM Trans. Graph..

[16]  D. Altman,et al.  Statistics notes: Cronbach's alpha , 1997 .

[17]  Naoyuki Kubota,et al.  Design support system for emotional expression of robot partners using interactive evolutionary computation , 2012, 2012 IEEE International Conference on Fuzzy Systems.

[18]  J. Russell A circumplex model of affect. , 1980 .

[19]  Niloy J. Mitra,et al.  Spectral style transfer for human motion between independent actions , 2016, ACM Trans. Graph..

[20]  Norman I. Badler,et al.  The EMOTE model for effort and shape , 2000, SIGGRAPH.

[21]  Sebastian Nowozin,et al.  Efficient Nonlinear Markov Models for Human Motion , 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.

[22]  Meinard Müller,et al.  Efficient content-based retrieval of motion capture data , 2005, SIGGRAPH '05.

[23]  Nadia Magnenat-Thalmann,et al.  Continuous body emotion recognition system during theater performances , 2016, Comput. Animat. Virtual Worlds.

[24]  J. R. Quinlan Learning With Continuous Classes , 1992 .

[25]  Shlomo Bentin,et al.  The automaticity of emotional face-context integration. , 2011, Emotion.

[26]  Titus B. Zaharia,et al.  Laban descriptors for gesture recognition and emotional analysis , 2015, The Visual Computer.

[27]  Hazem Wannous,et al.  Grassmannian Representation of Motion Depth for 3D Human Gesture and Action Recognition , 2014, 2014 22nd International Conference on Pattern Recognition.

[28]  Titus B. Zaharia,et al.  Dynamic Gesture Recognition with Laban Movement Analysis and Hidden Markov Models , 2016, CGI.

[29]  Reid G. Simmons,et al.  Expressive path shape (swagger): Simple features that illustrate a robot's attitude toward its goal in real time , 2016, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

[30]  Zicheng Liu,et al.  HON4D: Histogram of Oriented 4D Normals for Activity Recognition from Depth Sequences , 2013, 2013 IEEE Conference on Computer Vision and Pattern Recognition.

[31]  Jessica K. Hodgins,et al.  Realtime style transfer for unlabeled heterogeneous human motion , 2015, ACM Trans. Graph..

[32]  Daniel Cohen-Or,et al.  Emotion control of unstructured dance movements , 2017, Symposium on Computer Animation.

[33]  Andreas Aristidou,et al.  Emotion Analysis and Classification: Understanding the Performers' Emotions Using the LMA Entities , 2015, Comput. Graph. Forum.

[34]  Wei-Yin Loh,et al.  Classification and regression trees , 2011, WIREs Data Mining Knowl. Discov..

[35]  Antonio Camurri,et al.  Toward a Minimal Representation of Affective Gestures , 2011, IEEE Transactions on Affective Computing.

[36]  Jake K. Aggarwal,et al.  View invariant human action recognition using histograms of 3D joints , 2012, 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.

[37]  J. Russell Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. , 1994, Psychological bulletin.

[38]  Jean-Yves Didier,et al.  Gesture recognition for humanoid robot teleoperation , 2017, 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN).

[39]  Helena M. Mentis,et al.  Instructing people for training gestural interactive systems , 2012, CHI.

[40]  Dana Kulic,et al.  Laban Effort and Shape Analysis of Affective Hand and Arm Movements , 2013, 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction.

[41]  Imran N. Junejo,et al.  Silhouette-based human action recognition using SAX-Shapes , 2014, The Visual Computer.

[42]  Tolga K. Çapin,et al.  Classification of human motion based on affective state descriptors , 2013, Comput. Animat. Virtual Worlds.

[43]  Norman I. Badler,et al.  Efficient motion retrieval in large motion databases , 2013, I3D '13.

[44]  George Papagiannakis,et al.  Style-based motion analysis for dance composition , 2017, The Visual Computer.

[45]  Wanqing Li,et al.  Action recognition based on a bag of 3D points , 2010, 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops.

[46]  Ran R. Hassin,et al.  Angry, Disgusted, or Afraid? , 2008, Psychological science.