Affect Recognition Using Magnitude Models of Motion

The analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, neuroscience, and related disciplines. We focus on recognizing the affective state of a single person from video streams. We propose a model that estimates the state of four affective dimensions of a person: arousal, anticipation, power, and valence. This sequence model is built on a magnitude model of motion constructed from a set of interest points tracked using optical flow. The state of each affective dimension is then predicted using an SVM. Experiments performed on a standard dataset showed promising results.
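The core feature described above, a magnitude model of motion over tracked interest points, can be illustrated with a minimal sketch. The function below is a hypothetical illustration, not the paper's exact formulation: it takes the positions of tracked points in two consecutive frames (as optical flow would provide), computes per-point displacement magnitudes, and summarizes them as a normalized histogram that could serve as a per-frame feature vector.

```python
import numpy as np

def magnitude_histogram(prev_pts, next_pts, n_bins=8, max_mag=10.0):
    """Normalized histogram of motion magnitudes for tracked points.

    prev_pts, next_pts: (N, 2) arrays of point positions in two
    consecutive frames (e.g. from pyramidal Lucas-Kanade tracking).
    """
    disp = next_pts - prev_pts                      # per-point displacement
    mags = np.linalg.norm(disp, axis=1)             # motion magnitudes
    hist, _ = np.histogram(mags, bins=n_bins, range=(0.0, max_mag))
    # Normalize so the feature does not depend on how many points survived tracking.
    return hist / max(hist.sum(), 1)

# Toy example: four points, mostly small rightward motion plus one outlier.
prev = np.zeros((4, 2))
nxt = np.array([[1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [9.0, 0.0]])
feat = magnitude_histogram(prev, nxt, n_bins=4, max_mag=8.0)
```

In the paper's setting, such per-frame vectors would be aggregated over a sequence and fed to an SVM regressor or classifier per affective dimension; the bin count and magnitude range here are illustrative choices, not values from the paper.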
