Stochastic matching for robust speech recognition

Presents an approach to decrease the acoustic mismatch between a test utterance Y and a given set of speech hidden Markov models Λ_X, in order to reduce the recognition performance degradation caused by possible distortions in the test utterance. This is accomplished by a parametric function that transforms either Y or Λ_X to better match the other. The functional form of the transformation depends on prior knowledge about the mismatch, and its parameters are estimated jointly with the recognized string in a maximum-likelihood manner. Experimental results verify the efficacy of the approach in improving the performance of a continuous speech recognition system in the presence of mismatch due to different transducers and transmission channels.
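As a rough illustration of the feature-space variant of this idea, the sketch below assumes the mismatch is a fixed additive bias b in the feature space (a common prior for channel distortion) and estimates b by EM against a small diagonal-covariance Gaussian mixture standing in for the models Λ_X. The toy mixture, the bias values, and all variable names are illustrative assumptions, not taken from the paper; the real method estimates the transformation jointly with HMM decoding.

```python
import numpy as np

# Hedged sketch: feature-space stochastic matching under an assumed
# additive bias b, i.e. matched features X = Y - b.  A 2-component
# diagonal-covariance GMM stands in for the clean models Lambda_X.
rng = np.random.default_rng(0)

weights = np.array([0.5, 0.5])
means = np.array([[0.0, 0.0], [3.0, 3.0]])
var = np.array([[1.0, 1.0], [1.0, 1.0]])  # diagonal variances

# Simulate a mismatched utterance: clean frames plus a channel bias.
true_bias = np.array([1.5, -0.8])
comp = rng.integers(0, 2, size=400)
clean = means[comp] + rng.standard_normal((400, 2)) * np.sqrt(var[comp])
Y = clean + true_bias

def estimate_bias(Y, weights, means, var, iters=20):
    """EM estimate of the bias b maximizing the likelihood of Y - b."""
    b = np.zeros(Y.shape[1])
    for _ in range(iters):
        X = Y - b                                 # current matched features
        # E-step: component posteriors gamma[t, k] under the GMM.
        d = X[:, None, :] - means[None]           # (T, K, D)
        logp = -0.5 * np.sum(d**2 / var + np.log(2 * np.pi * var), axis=2)
        logp += np.log(weights)
        logp -= logp.max(axis=1, keepdims=True)   # stabilize exp
        g = np.exp(logp)
        g /= g.sum(axis=1, keepdims=True)
        # M-step: closed-form update, a precision-weighted mean of the
        # residuals (y_t - mu_k), which maximizes the expected log-likelihood.
        prec = g[:, :, None] / var[None]          # (T, K, D)
        num = np.sum(prec * (Y[:, None, :] - means[None]), axis=(0, 1))
        den = np.sum(prec, axis=(0, 1))
        b = num / den
    return b

b_hat = estimate_bias(Y, weights, means, var)
print(np.round(b_hat, 2))  # should land near true_bias
```

In this simplified setting the M-step has a closed form because the transformation is a pure shift; richer transformation families (e.g. affine maps) lead to the same alternating estimate-and-decode structure but with different update equations.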