Robust EEG emotion classification using segment level decision fusion

In this paper, we address single-trial binary classification of emotion dimensions (arousal, valence, dominance, and liking) from electroencephalogram (EEG) signals recorded as responses to audio-visual stimuli. We propose a three-step solution: (1) in contrast to the typical response-level feature extraction, we represent the EEG signal as a sequence of overlapping segments and extract feature vectors at the segment level; (2) we transform the segment-level features into response-level features using projections based on a novel non-parametric nearest-neighbor model; and (3) we perform classification on the resulting response-level features. We demonstrate the efficacy of our approach by performing binary classification of the emotion dimensions on DEAP (a Database for Emotion Analysis using Physiological Signals) and report state-of-the-art classification accuracies for all emotion dimensions.
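
The three-step pipeline can be illustrated with a short sketch. The Python code below is only an illustration under assumed choices that the abstract does not specify: log band-power segment features over 1 s windows with a 0.5 s hop, a projection that maps each response to the mean nearest-neighbor distance between its segments and class-specific segment pools, a linear SVM at the response level, and the 128 Hz sampling rate of the preprocessed DEAP recordings. The function names (segment_features, nn_projection, fit_and_score) are hypothetical and not taken from the paper.

# Minimal sketch of segment-level features, a nearest-neighbor projection to
# response-level features, and response-level classification (assumptions noted above).
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

BANDS = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta, alpha, beta, gamma (assumed)

def segment_features(eeg, fs, win_s=1.0, hop_s=0.5):
    """eeg: (channels, samples) -> (n_segments, channels * n_bands) log band powers."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    feats = []
    for start in range(0, eeg.shape[1] - win + 1, hop):
        seg = eeg[:, start:start + win]
        f, pxx = welch(seg, fs=fs, nperseg=win, axis=1)
        bp = [pxx[:, (f >= lo) & (f < hi)].mean(axis=1) for lo, hi in BANDS]
        feats.append(np.log(np.concatenate(bp) + 1e-12))
    return np.asarray(feats)

def nn_projection(response_segments, class_pools):
    """Project one response (its segment features) to a fixed-length vector:
    for each class, the mean nearest-neighbor distance from the response's
    segments to that class's pool of training segments."""
    vec = []
    for pool in class_pools:
        nn = NearestNeighbors(n_neighbors=1).fit(pool)
        dist, _ = nn.kneighbors(response_segments)
        vec.append(dist.mean())
    return np.asarray(vec)

def fit_and_score(train_eeg, train_y, test_eeg, test_y, fs=128):
    """train_eeg/test_eeg: lists of (channels, samples) arrays; labels in {0, 1}."""
    train_segs = [segment_features(x, fs) for x in train_eeg]
    # Note: in a rigorous setup each training response's own segments should be
    # excluded from its class pool when computing its projection.
    pools = [np.vstack([s for s, y in zip(train_segs, train_y) if y == c]) for c in (0, 1)]
    X_train = np.array([nn_projection(s, pools) for s in train_segs])
    X_test = np.array([nn_projection(segment_features(x, fs), pools) for x in test_eeg])
    clf = SVC(kernel="linear").fit(X_train, train_y)
    return clf.score(X_test, np.asarray(test_y))

One motivation for a projection of this kind, consistent with the segment-level fusion idea in the title, is that aggregating nearest-neighbor evidence over segments can remain sensitive to short, emotionally salient portions of a response that whole-trial feature averaging would wash out.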
