A Joint Particle Filter and Multi-Step Linear Prediction Framework to Provide Enhanced Speech Features Prior to Automatic Recognition

Automatic speech recognition that works well on recordings captured with mid- or far-field microphones is essential for natural verbal communication between humans and machines. While a great deal of research effort has addressed one of the two distortions frequently encountered in mid- and far-field sound capture, namely non-stationary noise and reverberation, much less work has been undertaken to combat both kinds of distortion jointly. In our view, however, such a joint approach is essential to further reduce the catastrophic effects of noise and reverberation that arise as soon as the microphone is more than a few centimeters from the speaker's mouth. We propose to integrate an estimate of the reverberation, obtained by multi-step linear prediction, into a particle filter framework that tracks and removes non-stationary additive distortions. Evaluations on actual recordings with different speaker-to-microphone distances demonstrate that techniques combating either non-stationary noise or reverberation can be combined to good effect.
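
To make the reverberation estimate concrete, the sketch below illustrates one common way to realize multi-step (delayed) linear prediction: the observed signal is predicted only from samples lying at least a fixed delay in the past, so the prediction captures the long-term, late-reverberant correlation, whose short-time power can then be subtracted from the observed spectra before any further feature enhancement. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names, prediction order, prediction delay, frame settings, and subtraction floor are all illustrative choices.

```python
import numpy as np
from numpy.fft import rfft


def mslp_late_reverb(x, order=30, delay=800):
    """Estimate the late-reverberation component of a reverberant signal x
    by multi-step (delayed) linear prediction: x[t] is predicted from
    samples lying at least `delay` samples in the past, so that only the
    long-term (reverberant) correlation is modelled.  Returns the predicted
    late-reverberation signal, time-aligned with x.  (Illustrative sketch,
    not the paper's implementation.)"""
    n = len(x)
    y = x[delay + order - 1:]                        # prediction targets
    m = len(y)
    # Regressor matrix: column k holds x[t - delay - k] for every target x[t]
    A = np.column_stack([x[order - 1 - k: order - 1 - k + m]
                         for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares LP fit
    late = np.zeros(n)
    late[delay + order - 1:] = A @ coeffs            # predicted reverberant part
    return late


def subtract_reverb_power(x, late, n_fft=512, hop=128, floor=0.01):
    """Remove the estimated late-reverberation power from the observed
    short-time power spectra (a simple spectral-subtraction-style use of
    the multi-step LP estimate).  The enhanced spectra would then serve as
    input to a subsequent feature-enhancement stage."""
    window = np.hanning(n_fft)
    enhanced = []
    for start in range(0, len(x) - n_fft, hop):
        X = np.abs(rfft(window * x[start:start + n_fft])) ** 2
        R = np.abs(rfft(window * late[start:start + n_fft])) ** 2
        enhanced.append(np.maximum(X - R, floor * X))  # floor avoids negative power
    return np.array(enhanced)


if __name__ == "__main__":
    # Toy usage: a synthetic "reverberant" signal (white noise convolved with
    # a decaying random impulse response); 16 kHz sampling is assumed, so a
    # delay of 800 samples corresponds to roughly 50 ms of early reflections.
    rng = np.random.default_rng(0)
    dry = rng.standard_normal(16000)
    rir = rng.standard_normal(2000) * np.exp(-np.arange(2000) / 400.0)
    wet = np.convolve(dry, rir)[:16000]
    late = mslp_late_reverb(wet, order=30, delay=800)
    spectra = subtract_reverb_power(wet, late)
    print(spectra.shape)
```

In the framework described above, such a late-reverberation power estimate would not be subtracted directly but handed, together with the observation, to the particle filter stage that jointly tracks the non-stationary additive distortion; the sketch only fixes the intuition behind the multi-step prediction step.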
