Speech Emotion Recognition Using Regularized Discriminant Analysis

Speech emotion recognition plays a vital role in the field of Human-Computer Interaction. The aim of a speech emotion recognition system is to extract information from the speech signal and identify the emotional state of the speaker; the extracted information must therefore be appropriate for the analysis of emotions. This paper analyses the characteristics of prosodic and spectral features. In addition, a feature fusion technique is used to improve performance. We use Linear Discriminant Analysis (LDA), Regularized Discriminant Analysis (RDA), Support Vector Machines (SVM), and K-Nearest Neighbor (KNN) as classifiers. Results suggest that spectral features outperform prosodic features. Results are validated on the Berlin and Spanish emotional speech databases.
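As a minimal sketch of the pipeline the abstract describes, the snippet below fuses synthetic "prosodic" and "spectral" feature vectors by concatenation and compares the named classifiers on them. The data, feature dimensions, and class counts are invented for illustration, and shrinkage LDA from scikit-learn stands in for RDA; this is not the paper's implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n, n_classes = 300, 4          # hypothetical: 4 emotion classes
y = rng.integers(0, n_classes, n)

# Synthetic stand-ins for prosodic (e.g. pitch/energy) and
# spectral (e.g. MFCC-like) feature vectors, one row per utterance.
prosodic = rng.normal(y[:, None], 1.0, (n, 6))
spectral = rng.normal(y[:, None], 0.5, (n, 13))
fused = np.hstack([prosodic, spectral])   # feature fusion by concatenation

Xtr, Xte, ytr, yte = train_test_split(fused, y, random_state=0)

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    # Shrinkage-regularized LDA used here as a stand-in for RDA.
    "RDA (shrinkage LDA)": LinearDiscriminantAnalysis(solver="lsqr",
                                                      shrinkage="auto"),
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in classifiers.items():
    clf.fit(Xtr, ytr)
    print(f"{name}: accuracy {accuracy_score(yte, clf.predict(Xte)):.2f}")
```

In practice the feature columns would come from an acoustic front end rather than random draws; the fusion step and classifier comparison stay the same.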