It is well known that the accuracy of speaker identification or speech recognition is normally good when the speech is recorded in a neutral environment; improving recognition accuracy on speech recorded in emotional environments remains a challenging task. This paper discusses the effectiveness of an iterative clustering technique and Gaussian mixture modeling (GMM) for recognizing speech and speakers from emotional speech, using Mel-frequency perceptual linear predictive cepstral coefficients (MFPLPC), and MFPLPC concatenated with probability, as features. For emotion-independent speech recognition, models are created from speech in the archetypal emotions boredom, disgust, fear, happiness, neutrality and sadness, and testing is done on speech in the emotion anger. For text-independent speaker recognition, individual models are created for all speakers from the speech of nine utterances, and testing is done on the speech of a tenth utterance. 80% of the data is used for training and 20% for testing. The system achieves an average accuracy of 95% for both text-independent speaker recognition and emotion-independent speech recognition when tested on models built with MFPLPC and with MFPLPC concatenated with probability. Accuracy increases by 1% if group classification is performed before speaker classification, with the set of male or the set of female speakers forming a group. Text-independent speaker recognition is also evaluated by performing group classification with the clustering technique and then identifying the speaker within a group by applying the test vectors to the GMM models of the small set of speakers in that group; this yields an accuracy of 97%.
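The per-speaker GMM classification described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: synthetic Gaussian vectors stand in for real MFPLPC features, and the speaker labels, dimensions, and mixture sizes are assumptions chosen only for the demonstration. One GMM is fitted per speaker, and a test utterance is assigned to the speaker whose model yields the highest average log-likelihood.

```python
# Sketch of GMM-based speaker identification: one model per speaker,
# maximum-likelihood decision over the test utterance's feature vectors.
# Synthetic features are used here in place of real MFPLPC vectors.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_dim = 13  # assumed feature dimension (e.g. cepstral coefficients)

# Synthetic training vectors for three hypothetical speakers,
# each speaker drawn around a distinct mean.
train = {s: rng.normal(loc=3.0 * s, scale=1.0, size=(200, n_dim))
         for s in range(3)}

# Train one GMM per speaker on that speaker's training vectors.
models = {s: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(x)
          for s, x in train.items()}

def identify(test_vectors):
    """Return the speaker whose GMM gives the highest mean log-likelihood."""
    scores = {s: m.score(test_vectors) for s, m in models.items()}
    return max(scores, key=scores.get)

# Test utterance drawn from speaker 1's distribution.
test = rng.normal(loc=3.0, scale=1.0, size=(50, n_dim))
speaker = identify(test)
print(speaker)
```

The group-classification variant reported in the abstract would simply restrict the `models` dictionary to the speakers in the predicted group before scoring, shrinking the candidate set and, per the reported results, improving accuracy.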