Emotion recognition from syllabic units using KNN classification and energy distribution

In this article, we present an automatic technique for recognizing emotional states from speech signals. The main focus of this paper is to present an efficient and reduced set of acoustic features that allows us to recognize four basic human emotions (ANGER, SADNESS, JOY, and NEUTRAL). The proposed feature vector is composed of twenty-eight measurements corresponding to standard acoustic features such as formants, fundamental frequency, etc. (obtained with the Praat software), as well as new features based on the energies computed in specific frequency bands and their distributions (computed with MATLAB code). The measurements are extracted from CV (consonant/vowel) syllabic units derived from the MADED corpus (Moroccan Arabic Dialect Emotional Database). The collected data are then used to train a K-Nearest-Neighbor classifier to perform the automated recognition phase. The recognition rate reaches 64.65% for the multi-class classification and 94.95% for the classification between positive and negative emotions.
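
The sketch below illustrates, under stated assumptions, the two computational steps named in the abstract: summing the spectral energy of a syllabic unit over a few frequency bands (and normalizing to obtain its distribution) and training a K-Nearest-Neighbor classifier on the resulting features. The band edges, number of neighbors, file-loading helpers, and variable names are illustrative assumptions, not values or code taken from the paper, which uses Praat and MATLAB rather than Python.

```python
# Minimal sketch, not the authors' implementation: band-energy features
# from a CV syllable waveform, then a KNN classifier on those features.
import numpy as np
from scipy.io import wavfile
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Hypothetical frequency bands (Hz); the paper's actual bands are not listed here.
BANDS = [(0, 500), (500, 1000), (1000, 2000), (2000, 4000)]

def band_energy_features(signal, sample_rate):
    """Energy in each band plus its share of the total energy (the 'distribution')."""
    spectrum = np.abs(np.fft.rfft(signal.astype(float))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    energies = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in BANDS])
    total = energies.sum() or 1.0
    return np.concatenate([energies, energies / total])

def load_dataset(paths, labels):
    """Build a feature matrix from a list of syllable wav files (hypothetical helper)."""
    features = []
    for path in paths:
        rate, signal = wavfile.read(path)
        features.append(band_energy_features(signal, rate))
    return np.array(features), np.array(labels)

# Example training run, assuming `paths` and `labels` describe the corpus,
# with labels drawn from {"ANGER", "SADNESS", "JOY", "NEUTRAL"}:
# X, y = load_dataset(paths, labels)
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
# knn = KNeighborsClassifier(n_neighbors=5)   # neighbor count is an assumption
# knn.fit(X_train, y_train)
# print("accuracy:", knn.score(X_test, y_test))
```

In the paper, these band-energy measurements are combined with Praat-derived features (formants, fundamental frequency, etc.) into the full twenty-eight-dimensional vector before classification; the sketch covers only the MATLAB-side band-energy portion.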