Automatic and language-independent triphone training using phonetic tables [speech recognition]

Training triphone acoustic models for speech recognition is time-consuming and requires significant manual intervention. We present an alternative solution that performs automatic training using a phonetic pronunciation table summarizing the articulatory characteristics of the target language. The method can train triphones for any language, given an existing set of reference monophones in one or more languages, by automatically carrying out monophone seeding, triphone clustering, and the remaining training steps. The automatic nature of the training algorithm lends itself to parameter optimization, which can further improve recognition accuracy over manually trained models. In a continuous digit recognition experiment, automatically generated triphone models achieved a 1.26% error rate, compared to 2.30% for their manually trained counterparts.
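The clustering step described above can be illustrated with a minimal sketch. The phone inventory, feature assignments, and function names below are hypothetical placeholders, not the paper's actual table or algorithm; the sketch only shows the general idea of partitioning triphone contexts by a shared articulatory feature drawn from a phonetic table, in the style of a single decision-tree question.

```python
# Hypothetical phonetic table: phone -> set of articulatory features.
# Feature assignments here are illustrative, not taken from the paper.
PHONETIC_TABLE = {
    "p": {"bilabial", "plosive", "voiceless"},
    "b": {"bilabial", "plosive", "voiced"},
    "t": {"alveolar", "plosive", "voiceless"},
    "d": {"alveolar", "plosive", "voiced"},
    "s": {"alveolar", "fricative", "voiceless"},
    "z": {"alveolar", "fricative", "voiced"},
}

def cluster_left_contexts(left_phones, feature):
    """Partition the left-context phones of a triphone according to
    whether the phonetic table assigns them the given feature,
    mimicking one decision-tree-style split used in state tying."""
    yes = sorted(p for p in left_phones if feature in PHONETIC_TABLE[p])
    no = sorted(p for p in left_phones if feature not in PHONETIC_TABLE[p])
    return yes, no

# Split the left contexts of a hypothetical triphone center by "plosive".
yes, no = cluster_left_contexts(["p", "b", "s", "z"], "plosive")
print(yes, no)  # ['b', 'p'] ['s', 'z']
```

In a full system such splits would be chosen greedily by a likelihood criterion over many candidate feature questions; this sketch fixes the question by hand purely for clarity.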
