The Impact of Attention Mechanisms on Speech Emotion Recognition

Speech emotion recognition (SER) plays an important role in real-time human-machine interaction. Attention mechanisms are widely used to improve SER performance, yet the rules governing when each type of attention is applicable have not been examined in depth. This paper analyzes the difference between global attention and self-attention and explores how each applies to the construction of SER classifiers. The experimental results show that, for models built from a CNN and an LSTM, global attention improves the accuracy of the sequential architecture, while self-attention improves the accuracy of the parallel architecture. Based on this finding, a CNN-LSTM×2+Global-Attention classifier for SER is proposed; it achieves an accuracy of 85.427% on the EMO-DB dataset.
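
The proposed classifier combines a CNN front end, two stacked LSTM layers, and a global-attention pooling step over the LSTM outputs. The following is a minimal PyTorch sketch of such a sequential CNN-LSTM model with global attention; the layer sizes, the Mel-spectrogram input shape, and the attention scoring function are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch: sequential CNN-LSTM x 2 with global-attention pooling for SER.
    # All hyperparameters below are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CNNLSTMGlobalAttention(nn.Module):
        def __init__(self, n_mels=40, n_classes=7, hidden=128):
            super().__init__()
            # CNN front end over the (mel x time) spectrogram
            self.conv = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d((2, 2)),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d((2, 2)),
            )
            feat_dim = 64 * (n_mels // 4)
            # Two stacked LSTM layers over the time axis ("LSTM x 2")
            self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
            # Global attention: score each time step, pool by softmax-weighted sum
            self.attn = nn.Linear(hidden, 1)
            self.fc = nn.Linear(hidden, n_classes)

        def forward(self, x):                 # x: (batch, 1, n_mels, time)
            h = self.conv(x)                  # (batch, 64, n_mels/4, time/4)
            b, c, f, t = h.shape
            h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)   # (batch, time, feat)
            out, _ = self.lstm(h)             # (batch, time, hidden)
            scores = self.attn(torch.tanh(out))              # (batch, time, 1)
            weights = F.softmax(scores, dim=1)               # attention over time
            context = (weights * out).sum(dim=1)             # (batch, hidden)
            return self.fc(context)           # emotion logits

For example, logits = CNNLSTMGlobalAttention()(torch.randn(8, 1, 40, 300)) yields one emotion score vector per utterance in an 8-item batch, assuming 40 Mel bands and 300 frames per spectrogram.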
