Comparison of similarity among sub-categories of angry speech emotion

Irritation and disgust are two sub-categories of the angry emotion that often confuse detection systems built on anger analysis. This work aims to distinguish these secondary emotions from the primary angry state. Recognizing and separating these states can help people maintain better social relationships. It can also serve as an input for psychologists and medical practitioners, providing advance warning before the onset of a possible full-blown emotional episode, so that an affected person can be treated with compassion before any unwarranted rupture in social relationships. Because these secondary emotions are similar to, and overlap with, the angry state, drawing a boundary between them provides a favorable platform for effective intervention. We address this problem using different similarity measures between these states, with features such as the short-time Fourier transform (STFT), the chirp transform, and short-time energy (STE) used to establish boundaries among them. The results section demonstrates the effectiveness of the approach; a minimal feature-extraction sketch is given below.
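The sketch below is not the authors' exact pipeline; it only illustrates the kind of features named in the abstract. It extracts an STFT-based spectral summary and short-time energy statistics from two utterances and compares them with cosine similarity (the chirp transform, sampling rate, frame sizes, and the choice of cosine similarity are illustrative assumptions).

```python
# Minimal sketch, assuming 16 kHz speech and cosine similarity as the measure.
import numpy as np
from scipy.signal import stft

def stft_feature(x, fs=16000, nperseg=400, noverlap=240):
    """Mean log-magnitude spectrum over all frames (one vector per utterance)."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return np.log1p(np.abs(Z)).mean(axis=1)

def short_time_energy(x, frame_len=400, hop=160):
    """Per-frame energy, summarised by its mean and standard deviation."""
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]
    ste = np.array([np.sum(np.asarray(f, dtype=float) ** 2) for f in frames])
    return np.array([ste.mean(), ste.std()])

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Synthetic signals stand in for anger/irritation clips (hypothetical data).
rng = np.random.default_rng(0)
anger, irritation = rng.standard_normal(16000), rng.standard_normal(16000)

feat_a = np.concatenate([stft_feature(anger), short_time_energy(anger)])
feat_i = np.concatenate([stft_feature(irritation), short_time_energy(irritation)])
print("similarity(anger, irritation) =", cosine_similarity(feat_a, feat_i))
```

In such a setup, a higher similarity score between class-averaged feature vectors would indicate greater overlap between emotional states, while lower scores would support drawing a boundary between them.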
