Entropy Based Boundary-Eliminated Pseudo-Inverse Linear Discriminant for Speech Emotion Recognition

Remarkable advances have been achieved in speech emotion recognition (SER) with efficient and feasible models. These studies focus on the capability of the model itself, yet they ignore the underlying distribution of the speech data. In practice, emotional speech data are imbalanced because of the way humans express emotion. To overcome this imbalance problem, the present work extends our previous study on the Boundary-Eliminated Pseudo-Inverse Linear Discriminant (BEPILD) model by introducing information entropy, which helps describe the distribution of the speech data. As a result, an Entropy-based Boundary-Eliminated Pseudo-Inverse Linear Discriminant (EBEPILD) model is proposed to generate more robust hyperplanes for speech data with high class uncertainty. Experiments conducted on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database with four emotion states show that EBEPILD achieves outstanding performance compared with other algorithms.
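The abstract does not give EBEPILD's formulas, but the core ingredient it names, information entropy, is standard. As a minimal sketch (assuming Shannon entropy over per-sample class probabilities; the actual weighting scheme inside EBEPILD is not specified in the abstract), a sample's class uncertainty can be quantified like this:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution.

    High entropy indicates a sample whose class membership is
    uncertain; such samples could be treated differently when fitting
    discriminant hyperplanes. Note: the exact way EBEPILD uses entropy
    is an assumption here, not taken from the source.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A confident sample vs. a maximally ambiguous one over four emotion classes.
confident = shannon_entropy([0.97, 0.01, 0.01, 0.01])  # low entropy
ambiguous = shannon_entropy([0.25, 0.25, 0.25, 0.25])  # maximum: log2(4) = 2 bits
```

Samples with entropy near the 2-bit maximum lie in regions of high class overlap, which is where a boundary-eliminating discriminant would benefit most from distribution-aware treatment.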
