Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition

This paper proposes two multimodal fusion methods that combine brain and peripheral signals for emotion recognition. The input signals are electroencephalogram (EEG) and facial expression. The stimuli are a subset of movie clips corresponding to four regions of the valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, the four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, the four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is performed by decision-level fusion of the EEG and facial expression detections using either a sum rule or a product rule. Twenty healthy subjects participated in two experiments. The results show that the two multimodal fusion methods achieve accuracies of 81.25% and 82.75%, respectively, both higher than those of facial expression detection alone (74.38%) or EEG detection alone (66.88%). Combining facial expression and EEG information for emotion recognition compensates for the limitations of each as a single information source.
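To make the decision-level fusion concrete, the sketch below illustrates sum-rule and product-rule combination of per-class posteriors from the two unimodal classifiers. This is a minimal illustration under stated assumptions, not the authors' implementation: the probability values, the equal weighting in the sum rule, and the function names are all hypothetical.

```python
import numpy as np

# Four emotion states from the paper; the probability vectors below are
# illustrative stand-ins for the unimodal classifier outputs.
EMOTIONS = ["happiness", "neutral", "sadness", "fear"]

p_face = np.array([0.55, 0.20, 0.15, 0.10])  # facial-expression classifier posteriors (assumed)
p_eeg = np.array([0.40, 0.35, 0.15, 0.10])   # EEG classifier posteriors (assumed)

def fuse_sum(probs_a, probs_b, w_a=0.5, w_b=0.5):
    """Sum-rule fusion: weighted average of class posteriors (equal weights assumed)."""
    fused = w_a * probs_a + w_b * probs_b
    return fused / fused.sum()

def fuse_product(probs_a, probs_b):
    """Product-rule fusion: element-wise product of class posteriors, renormalized."""
    fused = probs_a * probs_b
    return fused / fused.sum()

for name, fused in [("sum", fuse_sum(p_face, p_eeg)),
                    ("product", fuse_product(p_face, p_eeg))]:
    label = EMOTIONS[int(np.argmax(fused))]
    print(f"{name:7s} rule -> {label}: {fused.round(3)}")
```

In this toy example the sum rule tends to be more tolerant of one weak modality, while the product rule rewards agreement between the two classifiers; either way, the final label is the class with the highest fused posterior.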
