A Multimodal System for Nonverbal Human Feature Recognition in Emotional Framework

Accurate recognition of nonverbal expressions is currently one of the most important challenges in human-computer interaction research. The ability to recognize human actions could change the way we interact with machines in several environments and contexts, or even the way we live. In this paper, we describe advances on a previous study by some of the authors, aimed at designing, implementing, and validating an innovative recognition system. The system recognizes two opposite emotional conditions, resonance and dissonance, of a candidate for a job position interacting with the recruiter during a job interview. Results in terms of accuracy, resonance rate, and dissonance rate of three new, optimized neural network (NN) classifiers are discussed, and compared with previous results of three NN classifiers each based on a single domain: facial, vocal, and gestural.
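The abstract reports three evaluation metrics. A minimal sketch of how they could be computed, assuming "resonance rate" and "dissonance rate" denote the per-class recall of a binary classifier (this interpretation, and the example labels, are assumptions, not taken from the paper):

```python
# Sketch: overall accuracy plus per-class rates for a binary
# resonance/dissonance classifier. The metric definitions here are
# an assumed interpretation of the abstract, not the authors' code.

def evaluate(y_true, y_pred):
    """Return (accuracy, resonance rate, dissonance rate)."""
    pairs = list(zip(y_true, y_pred))
    # Accuracy: fraction of all samples classified correctly.
    accuracy = sum(t == p for t, p in pairs) / len(pairs)

    def class_rate(label):
        # Recall on one class: correct predictions among its true samples.
        cls = [(t, p) for t, p in pairs if t == label]
        return sum(t == p for t, p in cls) / len(cls)

    return accuracy, class_rate("resonance"), class_rate("dissonance")

# Hypothetical labels for illustration only:
y_true = ["resonance", "resonance", "dissonance", "dissonance"]
y_pred = ["resonance", "dissonance", "dissonance", "dissonance"]
acc, res_rate, dis_rate = evaluate(y_true, y_pred)
# acc = 0.75, res_rate = 0.5, dis_rate = 1.0
```

Reporting the two per-class rates alongside accuracy guards against a classifier that scores well simply by favoring the majority class.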
