Linear discriminant - a new criterion for speaker normalization

In Vocal Tract Length Normalization (VTLN), a linear or nonlinear frequency transformation compensates for different vocal tract lengths. Finding good estimates of the speaker-specific warp parameters is a critical issue. Despite good results using the Maximum Likelihood criterion to find parameters for a linear warping, there are concerns about this method. We searched for a new criterion that enhances the inter-class separability in addition to optimizing the distribution of each phonetic class. Using such a criterion, Linear Discriminant Analysis determines a linear transformation into a lower-dimensional space. For VTLN, we keep the dimension constant and warp the training samples of each speaker such that the linear discriminant is optimized. Although this criterion depends on all training samples of all speakers, it can iteratively provide speaker-specific warp factors. We discuss how this approach can be applied in speech recognition and present first results on two different recognition tasks.

1 Speaker Normalization using VTLN

Vocal Tract Length Normalization (VTLN) has been shown to decrease the word error rate of a speech recognition system, compared to systems that do not use such an approach to reduce the variability introduced by different speakers. The main effect addressed here is a shift of the speakers' formant frequencies caused by their different vocal tract lengths. Two issues have been investigated. The first is how to map one speaker's spectrum onto that of a "standard" or average speaker, depending on a warp parameter that is correlated with the vocal tract length. The other is how to find an appropriate warp parameter for each speaker. Most studies assume that the same algorithm is used for training and test, but this is not always necessary. [Acero (1990)] used a bilinear transform with one speaker-dependent parameter.
In a first attempt, he observed that the algorithm chose a degenerate solution in which all input frames are transformed into a constant. Therefore, he enforced a constant average warping parameter over all speakers. If the vocal tract is modeled as a uniform tube of length L, the formant frequencies are proportional to 1/L. Therefore, some approaches use a linear warp of the frequency scale to normalize speakers. The warp can be performed in the time or the spectral domain. In the latter case, a new spectrum is derived by interpolation or by modifying the Mel frequency filter bank. When the warp is applied in the spectral domain, the problem of mismatching frequency ranges occurs. [Wegmann et al. (1996)] used a piecewise linear spectral mapping to avoid this problem. They estimated the slope of the transformation function based on a maximum likelihood criterion. [Eide and Gish (1996)] proposed a compromise between different vocal tract models, namely the uniform tube and the Helmholtz resonator. They warped the frequency axis f of a speaker according to
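A piecewise linear warp of the kind described above can be sketched as follows. This is a minimal illustration, not the exact function of any of the cited papers: the Nyquist frequency of 8 kHz and the breakpoint at 87.5% of Nyquist are assumed values chosen only for the example, and the second segment is constructed so that the Nyquist frequency maps onto itself, which is what avoids the frequency-range mismatch of a purely linear warp.

```python
import numpy as np

def piecewise_linear_warp(f, alpha, f_nyq=8000.0, break_frac=0.875):
    """Warp frequency axis with slope alpha up to a breakpoint, then
    with a second linear segment chosen so that f_nyq maps onto itself.
    alpha > 1 stretches low frequencies, alpha < 1 compresses them."""
    f = np.asarray(f, dtype=float)
    f0 = break_frac * f_nyq  # breakpoint on the input frequency axis
    # second segment passes through (f0, alpha*f0) and (f_nyq, f_nyq)
    slope_hi = (f_nyq - alpha * f0) / (f_nyq - f0)
    return np.where(f <= f0, alpha * f, alpha * f0 + slope_hi * (f - f0))
```

In a spectral-domain implementation, such a function would be applied to the center frequencies of the Mel filter bank, so that each speaker's filters are shifted according to that speaker's warp factor alpha.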

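The class-separability idea behind the proposed criterion can be made concrete with the classical Fisher measure trace(Sw⁻¹Sb), i.e. between-class scatter relative to within-class scatter over the phonetic classes. The following sketch is illustrative only; the function name and data layout are assumptions, not the paper's implementation.

```python
import numpy as np

def class_separability(X, y):
    """Fisher-style criterion trace(Sw^-1 Sb): between-class scatter
    relative to within-class scatter. X holds one feature vector per
    row, y the class label of each row. Larger values indicate better
    separated classes."""
    d = X.shape[1]
    overall_mean = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    return float(np.trace(np.linalg.solve(Sw, Sb)))
```

With such a measure, a speaker-specific warp factor could, for example, be chosen by evaluating the criterion for each candidate factor of one speaker while the other speakers' warps are held fixed, keeping the maximizing factor and iterating over speakers; this usage pattern is a sketch consistent with the iterative scheme described in the abstract, not its verbatim algorithm.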
[1] Tanja Schultz et al., "Language independent and language adaptive large vocabulary speech recognition," ICSLP, 1998.

[2] Puming Zhan et al., "Speaker normalization based on frequency warping," IEEE ICASSP, 1997.

[3] Klaus Ries et al., "The Karlsruhe-Verbmobil speech recognition engine," IEEE ICASSP, 1997.

[4] Alejandro Acero, "Acoustical and environmental robustness in automatic speech recognition," 1991.

[5] Keinosuke Fukunaga, "Introduction to Statistical Pattern Recognition," 1972.

[6] Herbert Gish et al., "A parametric approach to vocal tract length normalization," IEEE ICASSP, 1996.

[7] Evandro B. Gouvêa et al., "Speaker normalization through formant-based warping of the frequency scale," EUROSPEECH, 1997.

[8] S. Wegmann et al., "Speaker normalization on conversational telephone speech," IEEE ICASSP, 1996.

[9] Li Lee et al., "Speaker normalization using efficient frequency warping procedures," IEEE ICASSP, 1996.
