Multimodal Emotion Recognition Using Multimodal Deep Learning

To enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications, we adopt a multimodal deep learning approach to construct affective models from multiple physiological signals. For the unimodal enhancement task, we show that the best recognition accuracy of 82.11% on the SEED dataset is achieved with shared representations generated by a Deep AutoEncoder (DAE) model. For the multimodal facilitation task, we demonstrate that the Bimodal Deep AutoEncoder (BDAE) achieves mean accuracies of 91.01% and 83.25% on the SEED and DEAP datasets, respectively, which are much better than those of state-of-the-art approaches. For the cross-modal learning task, our experimental results demonstrate that a mean accuracy of 66.34% is achieved on the SEED dataset when shared representations generated by the EEG-based DAE are used as training samples and shared representations generated by the eye-movement-based DAE are used as testing samples, and vice versa.
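
To make the BDAE architecture concrete, the following PyTorch sketch fuses EEG and eye-movement features into a single shared representation and reconstructs both modalities from it. This is a minimal sketch under stated assumptions: the layer sizes, feature dimensions, and plain end-to-end backpropagation are illustrative choices only, whereas the original BDAE is pretrained layer-wise with restricted Boltzmann machines and its shared representations are fed to a separate downstream classifier.

```python
import torch
import torch.nn as nn

class BDAE(nn.Module):
    """Minimal sketch of a Bimodal Deep AutoEncoder.

    Two modality-specific encoders map EEG and eye-movement features
    into a shared code; two decoders reconstruct each modality from
    that code. All layer sizes here are illustrative assumptions,
    not the paper's exact configuration.
    """

    def __init__(self, eeg_dim=310, eye_dim=33, shared_dim=100):
        super().__init__()
        # Modality-specific encoders (assumed hidden size of 200).
        self.eeg_encoder = nn.Sequential(nn.Linear(eeg_dim, 200), nn.Sigmoid())
        self.eye_encoder = nn.Sequential(nn.Linear(eye_dim, 200), nn.Sigmoid())
        # Fusion layer producing the shared representation.
        self.fusion = nn.Sequential(nn.Linear(400, shared_dim), nn.Sigmoid())
        # Decoders reconstruct each modality from the shared code.
        self.eeg_decoder = nn.Sequential(nn.Linear(shared_dim, 200), nn.Sigmoid(),
                                         nn.Linear(200, eeg_dim))
        self.eye_decoder = nn.Sequential(nn.Linear(shared_dim, 200), nn.Sigmoid(),
                                         nn.Linear(200, eye_dim))

    def forward(self, eeg, eye):
        # Concatenate the two modality codes, then fuse them.
        h = torch.cat([self.eeg_encoder(eeg), self.eye_encoder(eye)], dim=1)
        shared = self.fusion(h)
        return shared, self.eeg_decoder(shared), self.eye_decoder(shared)

# Unsupervised training step: minimize the summed reconstruction error.
# The learned shared codes would then be classified by a separate model.
model = BDAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
eeg = torch.randn(32, 310)   # placeholder batch of EEG features
eye = torch.randn(32, 33)    # placeholder batch of eye-movement features
opt.zero_grad()
shared, eeg_rec, eye_rec = model(eeg, eye)
loss = nn.functional.mse_loss(eeg_rec, eeg) + nn.functional.mse_loss(eye_rec, eye)
loss.backward()
opt.step()
```

The concatenation-then-fusion layer is what yields one code for both modalities; one common way such bimodal autoencoders are used in cross-modal settings is to present a single modality (with the other zeroed or imputed) and still read out the shared representation.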
