Iterative feature normalization for emotional speech detection

Contending with signal variability due to source and channel effects is a critical problem in automatic emotion recognition. Any approach to mitigating these effects, however, must avoid compromising the emotion-relevant information in the signal. A promising direction is feature normalization using features drawn from non-emotional ("neutral") speech samples. This paper considers a scheme that minimizes inter-speaker differences while preserving the emotional discrimination of the acoustic features. This is achieved by estimating the normalization parameters using only neutral speech, and then applying the resulting coefficients to the entire corpus (including the emotional set). Specifically, this paper introduces a feature normalization scheme that implements these ideas by iteratively detecting neutral speech and normalizing the features. As the approximation error of the normalization parameters is reduced, the accuracy of the emotion detection system increases. The accuracy of the proposed iterative approach, evaluated across three databases, is only 2.5% lower than that of a system trained with optimal normalization parameters, and 9.7% higher than that of a system trained without any normalization.
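
The following is a minimal sketch of the iterative idea described above, not the paper's exact implementation: it assumes per-speaker z-normalization and a pluggable neutral-vs-emotional detector passed in as `detect_neutral`; all function and variable names here are illustrative.

```python
import numpy as np

def normalize_per_speaker(features, speaker_ids, neutral_mask):
    """Estimate mean/std from the utterances currently flagged as neutral for
    each speaker, then apply those parameters to all of that speaker's features."""
    normalized = np.empty_like(features, dtype=float)
    for spk in np.unique(speaker_ids):
        spk_rows = speaker_ids == spk
        ref_rows = spk_rows & neutral_mask        # estimate from neutral speech only
        if not ref_rows.any():                    # fallback if no neutral detected
            ref_rows = spk_rows
        mu = features[ref_rows].mean(axis=0)
        sigma = features[ref_rows].std(axis=0) + 1e-8
        normalized[spk_rows] = (features[spk_rows] - mu) / sigma
    return normalized

def iterative_feature_normalization(features, speaker_ids, detect_neutral,
                                    max_iter=10, tol=1e-3):
    """detect_neutral(normalized_features) -> boolean mask of utterances judged
    neutral; any emotion/neutral classifier can be plugged in here."""
    neutral_mask = np.ones(len(features), dtype=bool)  # start by assuming all neutral
    prev_mask = None
    for _ in range(max_iter):
        normalized = normalize_per_speaker(features, speaker_ids, neutral_mask)
        neutral_mask = detect_neutral(normalized)
        # stop once the detected neutral labels (and hence the normalization
        # parameters) have effectively stopped changing
        if prev_mask is not None and np.mean(prev_mask != neutral_mask) < tol:
            break
        prev_mask = neutral_mask
    return normalized, neutral_mask
```

In this sketch, each pass re-estimates the per-speaker normalization parameters from the speech currently detected as neutral and re-normalizes the whole corpus, so the parameter estimates improve as the neutral-speech detection improves, mirroring the iterative scheme described in the abstract.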