In this paper, we evaluate our results for the INTERSPEECH 2009 Emotion Challenge. The challenge poses the problem of accurately classifying natural, emotionally rich FAU Aibo recordings into five and two emotion classes. We evaluate prosody-related, spectral, and HMM-based features with Gaussian mixture model (GMM) classifiers to address this problem. The spectral features consist of mel-frequency cepstral coefficients (MFCC), line spectral frequency (LSF) features, and their derivatives, whereas the prosody-related features consist of pitch, the first derivative of pitch, and intensity. We employ unsupervised training of HMM structures with prosody-related temporal features to define the HMM-based features. We also investigate data fusion of different features and decision fusion of different classifiers to improve emotion recognition results. Our two-stage decision fusion method achieves recall rates of 41.59% and 67.90% for the five-class and two-class problems, respectively, placing second and fourth among the overall challenge results.
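As a rough illustration of the GMM classification stage described above, the sketch below trains one diagonal-covariance Gaussian mixture per emotion class on MFCC frames and labels a test utterance with the class whose model yields the highest total frame log-likelihood. The use of librosa for feature extraction, the mixture sizes, and the helper names are assumptions made for illustration only; they do not reproduce the challenge system's actual configuration or feature set.

```python
# Minimal sketch of per-class GMM classification on MFCC frames (assumed setup).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

N_MFCC = 13          # assumed feature dimensionality
N_COMPONENTS = 16    # assumed number of Gaussian components per class

def mfcc_frames(wav_path):
    """Return an (n_frames, N_MFCC) matrix of MFCC features for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC).T

def train_class_gmms(utterances_by_class):
    """utterances_by_class: dict mapping emotion label -> list of wav paths."""
    gmms = {}
    for label, paths in utterances_by_class.items():
        frames = np.vstack([mfcc_frames(p) for p in paths])
        gmms[label] = GaussianMixture(
            n_components=N_COMPONENTS, covariance_type="diag"
        ).fit(frames)
    return gmms

def classify(wav_path, gmms):
    """Pick the emotion whose GMM assigns the highest total log-likelihood."""
    frames = mfcc_frames(wav_path)
    scores = {label: gmm.score_samples(frames).sum() for label, gmm in gmms.items()}
    return max(scores, key=scores.get)
```

The paper's system would run analogous classifiers over the prosodic, LSF, and HMM-based feature streams and then combine them through data and decision fusion; the sketch covers only the single-stream MFCC case.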