Feature extraction algorithms to improve the speech emotion recognition rate

In this digitally growing era, speech emotion recognition plays a significant role in applications such as human-computer interaction (HCI), lie detection, automotive systems that assist steering, intelligent tutoring systems, audio mining, security, telecommunication, and human-machine interaction at home, in hospitals, in shops, and elsewhere. Speech is a uniquely human characteristic used to communicate and express one's perspective to others. Speech emotion recognition extracts the emotions of a speaker from his or her speech signal. Feature extraction, feature selection, and classification are the three main stages of emotion recognition. The main aim of this work is to improve the speech emotion recognition rate of a system using different feature extraction algorithms. The work emphasizes preprocessing of the received audio samples, in which noise is removed from the speech samples using filters. In the next step, Mel-frequency cepstral coefficients (MFCC), the discrete wavelet transform (DWT), pitch, energy, and the zero crossing rate (ZCR) are used to extract features. In the feature selection stage, a global feature algorithm is used to remove redundant information from the features, and machine learning classification algorithms are used to identify the emotions from the extracted features. These feature extraction algorithms are validated on the universal emotions of anger, happiness, sadness, and neutral.
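A minimal sketch of such a pipeline is given below, assuming the librosa, PyWavelets (pywt), NumPy, and scikit-learn packages; the file handling, labels, and all parameter values are illustrative assumptions, not the settings used in the paper. It extracts the named features per utterance, summarizes frame-level features into a fixed-length ("global") vector via mean and standard deviation, and feeds the vectors to a standard classifier.

```python
# Illustrative sketch only: parameters and estimator choices are assumptions.
import numpy as np
import librosa
import pywt
from sklearn.svm import SVC

def extract_features(path, sr=16000):
    """Return one fixed-length ("global") feature vector per utterance."""
    y, sr = librosa.load(path, sr=sr)

    # Frame-level features: MFCC, zero crossing rate, short-time energy (RMS).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, frames)
    zcr = librosa.feature.zero_crossing_rate(y)          # shape (1, frames)
    energy = librosa.feature.rms(y=y)                    # shape (1, frames)

    # Pitch contour via the YIN estimator (one of several possible choices).
    f0 = librosa.yin(y, fmin=50, fmax=500, sr=sr)        # shape (frames,)

    # 4-level discrete wavelet decomposition; summarize each sub-band.
    coeffs = pywt.wavedec(y, 'db4', level=4)
    dwt_stats = np.array([np.mean(np.abs(c)) for c in coeffs])

    # Collapse frame-level features to utterance-level statistics
    # (mean and standard deviation), yielding a single global vector.
    parts = [mfcc, zcr, energy, f0[np.newaxis, :]]
    stats = np.concatenate(
        [np.concatenate([p.mean(axis=1), p.std(axis=1)]) for p in parts]
    )
    return np.concatenate([stats, dwt_stats])

# Hypothetical usage: `files` and `labels` would come from an emotional speech
# corpus, with labels drawn from {anger, happiness, sadness, neutral}.
# X = np.stack([extract_features(f) for f in files])
# clf = SVC(kernel='rbf').fit(X, labels)
```

Averaging each feature over frames is one common way to obtain the global, utterance-level representation the abstract refers to; the SVM here stands in for whichever machine learning classifier is used in the final stage.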
