Gaussian mixture modeling (GMM) techniques are increasingly used for both speaker identification and verification. Most of these models assume diagonal covariance matrices. Although, empirically, any distribution can be approximated by a diagonal GMM, a large number of mixture components is usually needed to obtain a good approximation; a consequence is that training such a large GMM is time consuming and its response speed is slow. This paper proposes a modification to the standard diagonal-GMM approach. The proposed scheme includes an orthogonal transformation: feature vectors are first transformed into the space spanned by the eigenvectors of the covariance matrix before being applied to the diagonal GMM. This transformation introduces only a small computational load, yet results from both speaker identification and verification experiments indicate that it considerably improves recognition performance. For a given performance level, the GMM with the orthogonal transform needs only one-fourth the number of Gaussian functions required by the standard GMM.
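The orthogonal transformation described above amounts to projecting the feature vectors onto the eigenvectors of their covariance matrix, which decorrelates the feature dimensions and so better matches the diagonal-covariance assumption of the GMM. A minimal NumPy sketch of this step is given below; the function name and synthetic data are illustrative assumptions, not taken from the paper, and the paper's exact covariance-estimation details may differ.

```python
import numpy as np

def orthogonal_transform(features):
    """Project feature vectors onto the eigenvectors of their sample
    covariance matrix. Illustrative sketch: the transformed features have
    a (numerically) diagonal covariance, matching a diagonal GMM's
    independence assumption per dimension."""
    cov = np.cov(features, rowvar=False)    # d x d sample covariance
    _, eigvecs = np.linalg.eigh(cov)        # orthonormal eigenvector basis
    return features @ eigvecs               # decorrelated feature vectors

# Synthetic correlated "features" (assumed data, for demonstration only).
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 2)) @ np.array([[2.0, 1.0], [0.0, 1.0]])
y = orthogonal_transform(x)
cov_y = np.cov(y, rowvar=False)             # off-diagonal entries ~ 0
```

A diagonal GMM would then be trained on the transformed vectors `y`; since the transform is a single matrix multiplication per feature vector, the added computational load is small compared with GMM likelihood evaluation.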