Noise robust speech parameterization using multiresolution feature extraction

In this paper, we present a multiresolution-based feature extraction technique for speech recognition in adverse conditions. The proposed front-end algorithm computes mel cepstrum-based features on subbands so that noise distortions do not spread over the entire feature space. Conventional full-band features are also appended to the final feature vector that is fed to the recognition unit. Other novel aspects of the proposed front-end include the emphasis of long-term spectral information combined with cepstral domain feature vector normalization, and the use of the PCA transform, instead of the DCT, to produce the final cepstral parameters. The proposed algorithm was experimentally evaluated in a connected digit recognition task under various noise conditions. The results show that the new feature extraction algorithm improves word recognition accuracy by 41% compared to the mel cepstrum front-end. A substantial increase in recognition accuracy was observed in all tested noise environments at all SNRs. The good performance of the multiresolution front-end is not only due to its higher feature vector dimension; the proposed algorithm clearly outperformed the mel cepstral front-end even when the same number of HMM parameters was used in both systems. We also propose methods to reduce the computational complexity of the multiresolution front-end-based speech recognition system. Experimental results indicate the viability of the proposed techniques.

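To make the pipeline concrete, the following is a minimal sketch of the kind of processing the abstract describes: mel cepstral features computed per subband, augmented with full-band features, projected with a PCA basis instead of the DCT, and normalized in the cepstral domain. This is not the authors' implementation; the function names (`fit_pca`, `multiresolution_features`), the filterbank, the subband split, and the utterance-level normalization are illustrative assumptions, and the long-term spectral emphasis and delta features mentioned in the paper are omitted.

```python
# Illustrative sketch (assumed parameters, not the paper's exact front-end).
import numpy as np

def log_mel_energies(power_spectra, mel_fb):
    """Log mel filterbank energies; power_spectra: (T, n_fft_bins), mel_fb: (n_mels, n_fft_bins)."""
    return np.log(power_spectra @ mel_fb.T + 1e-10)          # (T, n_mels)

def fit_pca(train_feats, n_components):
    """Learn an orthogonal projection (PCA basis) from training-data features."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats - mu, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]          # leading components
    return mu, eigvecs[:, order]

def project(feats, mu, basis):
    """Replace the DCT step with the learned PCA projection."""
    return (feats - mu) @ basis

def normalize(feats):
    """Cepstral-domain mean and variance normalization (utterance level, assumed)."""
    return (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-10)

def multiresolution_features(power_spectra, mel_fb, subband_edges, pca_models):
    """Concatenate per-subband and full-band cepstra, each via its own PCA basis."""
    logmel = log_mel_energies(power_spectra, mel_fb)
    bands = subband_edges + [(0, logmel.shape[1])]            # subbands plus full band
    blocks = [project(logmel[:, lo:hi], mu, basis)
              for (lo, hi), (mu, basis) in zip(bands, pca_models)]
    return normalize(np.hstack(blocks))
```

As a design note, the DCT used in standard MFCC extraction is a fixed, data-independent decorrelating transform; a PCA basis estimated from training data, as sketched above, adapts the projection to the actual covariance of the log mel energies, which is one plausible reading of why the paper substitutes PCA for the DCT.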