Context dependent phonetic string edit distance for automatic speech recognition

An automatic speech recognition system searches for the word transcription with the highest overall score for a given acoustic observation sequence. This overall score is typically a weighted combination of a language model score and an acoustic model score. We propose adding a third score that measures the similarity between a candidate transcription's pronunciation and the output of a less constrained phonetic recognizer. We show how this phonetic string edit distance can be learned from data, and find that including context in the model is essential for good performance. We demonstrate improved accuracy on a business search task.
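
To make the scoring scheme concrete, here is a minimal Python sketch of how such a context-dependent phonetic edit distance could be computed and folded into the overall score. It is an illustration under stated assumptions, not the paper's actual implementation: the cost tables, the `default` fallback cost, the combination weights, and the choice of conditioning substitution cost on the preceding reference phone are all hypothetical; in the paper, the costs are learned from data.

```python
def phonetic_edit_distance(ref, hyp, sub_cost, ins_cost, del_cost, default=4.0):
    """Levenshtein-style dynamic program with context-dependent costs.

    ref: dictionary pronunciation of a candidate word transcription (phone list)
    hyp: phone string produced by the less constrained phonetic recognizer
    sub_cost: dict keyed by (previous ref phone, ref phone, hyp phone);
              values would be learned from aligned training pairs
    ins_cost / del_cost: dicts keyed by single phones
    default: fallback cost for phone events unseen in training (assumption)
    """
    n, m = len(ref), len(hyp)
    # d[i][j] = minimum cost of aligning ref[:i] with hyp[:j]
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + del_cost.get(ref[i - 1], default)
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + ins_cost.get(hyp[j - 1], default)
    for i in range(1, n + 1):
        left = ref[i - 2] if i > 1 else "<s>"  # left phonetic context
        for j in range(1, m + 1):
            match_default = 0.0 if ref[i - 1] == hyp[j - 1] else default
            sub = sub_cost.get((left, ref[i - 1], hyp[j - 1]), match_default)
            d[i][j] = min(
                d[i - 1][j - 1] + sub,                            # substitute/match
                d[i - 1][j] + del_cost.get(ref[i - 1], default),  # delete ref phone
                d[i][j - 1] + ins_cost.get(hyp[j - 1], default),  # insert hyp phone
            )
    return d[n][m]


def overall_score(lm_score, am_score, edit_dist, w_lm=1.0, w_am=1.0, w_ed=0.5):
    """Weighted log-linear combination of the three scores.

    The weights here are placeholders; in practice they would be tuned
    on held-out data. The edit distance enters with a negative sign
    since a larger distance should lower the hypothesis's score.
    """
    return w_lm * lm_score + w_am * am_score - w_ed * edit_dist


# Toy usage with a hypothetical cost table: after "ah", confusing a
# reference "n" with a recognized "m" is cheap because (in this made-up
# example) the pair co-occurs often in training alignments.
sub_cost = {("ah", "n", "m"): 0.3}
ref = ["k", "ah", "n", "t", "ae", "k", "t"]  # pronunciation of "contact"
hyp = ["k", "ah", "m", "t", "ae", "k", "t"]  # phonetic recognizer output
dist = phonetic_edit_distance(ref, hyp, sub_cost, {}, {})
```

Conditioning the substitution cost on the left phone is one simple way to realize the "context" in the title: it lets the model learn, for example, that /n/ is often recognized as /m/ in certain phonetic environments, a regularity a context-independent cost table cannot express.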
