Speech emotion recognition using derived features from speech segment and kernel principal component analysis

Speech emotion recognition is a challenging problem, and identifying efficient features is a particular concern. This paper has two components. First, it presents an empirical study evaluating four feature reduction methods (chi-square, gain ratio, RELIEF-F, and kernel principal component analysis (KPCA)) at the utterance level, using a support vector machine (SVM) as the classifier. KPCA achieved the highest F-score when compared with the average F-score of the other methods, and improved the F-score by up to 5.73% over classification without any feature reduction. Second, the paper applies statistical functions to raw segment-level features to derive global features; these features are then reduced with KPCA and classified with an SVM, and a majority vote over the segment predictions determines the emotion of the entire utterance. This approach outperformed five baselines: utterance-level features, utterance-level features with KPCA, segment-level features, segment-level features with KPCA, and segment-level features with statistical functions but without KPCA, yielding F-scores higher by 13.16%, 7.03%, 5.13%, 4.92%, and 11.04%, respectively.
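The proposed pipeline (statistical functionals over segment-level features, followed by a majority vote over per-segment predictions) can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names and the choice of functionals (mean, standard deviation, min, max) are assumptions, and the KPCA reduction and SVM classification steps are omitted since they would require an external machine-learning library.

```python
from collections import Counter
from statistics import mean, stdev

def derive_global_features(segment):
    """Apply statistical functionals to a segment's raw feature values
    to obtain a fixed-length global feature vector for that segment.
    (Hypothetical set of functionals; the paper's exact set may differ.)"""
    return [mean(segment), stdev(segment), min(segment), max(segment)]

def majority_vote(segment_labels):
    """Decide the utterance-level emotion from per-segment predictions."""
    return Counter(segment_labels).most_common(1)[0][0]

# Toy example: one utterance split into three segments of raw feature values.
utterance_segments = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.9], [0.2, 0.2, 0.1]]
global_features = [derive_global_features(s) for s in utterance_segments]
# In the full pipeline, global_features would be reduced with KPCA and
# classified per segment with an SVM; here we assume the per-segment
# predictions are already available.
utterance_emotion = majority_vote(["happy", "happy", "neutral"])
```

Deriving fixed-length global features per segment lets a standard classifier such as an SVM operate on variable-length speech, while the majority vote aggregates the segment decisions back to the utterance level.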
