On multiple transition branch hidden Markov models

In this paper we develop the basic theory of the probabilistic functions of a multiple-transition-branch hidden Markov model (MBHMM) for automatic speech recognition. Because it allows multiple transition branches between any two states, the new model can capture more of the spectral information in the speech signal than conventional models, which permit only a single transition branch between states. The evaluation, decoding, and training algorithms associated with the MBHMM are also derived. The resulting recognizer is tested on a vocabulary of ten Chinese digits spoken by 28 speakers; the recognition results show that the MBHMM outperforms conventional HMMs.
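To make the evaluation step concrete, the following is a minimal sketch (not the paper's own formulation) of how the standard forward recursion generalizes when each state pair is connected by K transition branches, each carrying its own transition probability and emission distribution. The array names `pi`, `A`, `B` and the convention that symbols are emitted on transitions are assumptions for illustration only.

```python
import numpy as np

def forward_mbhmm(pi, A, B, obs):
    """Evaluation (forward) pass for a multiple-branch HMM sketch.

    pi : (N,) initial state probabilities.
    A  : (N, N, K) branch transition probabilities; A[i, j, k] is the
         probability of taking branch k from state i to state j
         (sums to 1 over (j, k) for each i).
    B  : (N, N, K, M) branch emission probabilities; B[i, j, k, m] is the
         probability of emitting symbol m while traversing branch (i, j, k).
    obs: sequence of observed symbol indices.
    Returns P(obs | model).
    """
    alpha = np.asarray(pi, dtype=float)  # alpha[i]: prob. of being in state i
    for o in obs:
        # Generalized forward recursion: sum over source states i AND
        # branches k, where a conventional HMM would sum over i alone:
        #   alpha'[j] = sum_i alpha[i] * sum_k A[i,j,k] * B[i,j,k,o]
        alpha = np.einsum('i,ijk,ijk->j', alpha, A, B[:, :, :, o])
    return alpha.sum()
```

With K = 1 this collapses to the usual forward algorithm for a transition-emitting HMM, which is what makes the conventional model a special case of the multiple-branch one.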