Improve the generalization of emotional classifiers across time by using training samples from different days

Electroencephalography (EEG)-based emotion recognition has attracted increasing attention in the field of human-computer interaction (HCI). However, machines still face a number of challenges in correctly recognizing human emotional states. One problem is how to generalize an emotion model across time, since the brain may show different EEG patterns for the same emotional experience at different times. This study investigated the feasibility of improving the generalization of an emotion classifier by adding samples from different days to its training set. Eight subjects participated in the experiment; in each trial they watched different kinds of movie clips intended to induce a neutral, positive, or negative emotional state. Each subject completed five sessions on five different days, and EEG signals were recorded throughout. A support vector machine (SVM) was used to classify the three emotional states. In the N-day condition, data from N days were used to train the SVM and the remaining 5-N days formed the test set; the number of training samples was kept the same across conditions, whether they came from 1, 2, 3, or 4 days' sessions. The three categories were classified with average accuracies of 64.9%, 68.7%, 70.9%, and 73.0% for the 1-day, 2-day, 3-day, and 4-day conditions, respectively. Importantly, accuracy increased with the number of days in the training set for all subjects: compared with the 1-day condition, the 4-day condition improved accuracy by about 10%, with a peak of 81.2%. An analysis of the features selected in the 4-day condition showed that their distributions were relatively stable across days; the classifier selected emotion-relevant features and rejected time-relevant ones. These results suggest that incorporating samples from different days into the training set can improve the generalization of an emotion classifier across time.
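The N-day evaluation protocol described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the trial counts, feature dimensions, and random features are stand-ins, and the paper's step of equalizing the number of training samples across conditions is noted but omitted for brevity.

```python
import itertools

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for the real data: 5 recording days,
# 30 trials per day, 10 features per trial, 3 emotion classes.
N_DAYS, TRIALS, FEATS = 5, 30, 10
days_X = [rng.normal(loc=d * 0.1, size=(TRIALS, FEATS)) for d in range(N_DAYS)]
days_y = [rng.integers(0, 3, size=TRIALS) for _ in range(N_DAYS)]


def n_day_accuracy(n):
    """Mean test accuracy over all ways of choosing n of the 5 days
    for training; the remaining 5-n days form the test set.
    (The paper additionally subsampled so every condition used the
    same number of training samples; that step is skipped here.)"""
    accs = []
    for train_days in itertools.combinations(range(N_DAYS), n):
        test_days = [d for d in range(N_DAYS) if d not in train_days]
        X_tr = np.vstack([days_X[d] for d in train_days])
        y_tr = np.concatenate([days_y[d] for d in train_days])
        X_te = np.vstack([days_X[d] for d in test_days])
        y_te = np.concatenate([days_y[d] for d in test_days])
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)
        accs.append(clf.score(X_te, y_te))
    return float(np.mean(accs))


for n in range(1, 5):
    print(f"{n}-day condition: mean accuracy {n_day_accuracy(n):.3f}")
```

On real EEG features the paper reports accuracy rising monotonically from the 1-day to the 4-day condition; on this random synthetic data no such trend should be expected.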
