Competitive training in hidden Markov models (speech recognition)

The use of hidden Markov models is placed in a connectionist framework, and an alternative approach to improving their ability to discriminate between classes is described. Using a network style of training, a measure of discrimination based on the a posteriori probability of state occupation is proposed, and the theory for its optimization using error backpropagation and gradient ascent is presented. The method is shown to be numerically well behaved, and results are presented demonstrating that, when a simple threshold test on the probability of state occupation is used, the proposed optimization scheme leads to improved recognition performance.
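
To make the ingredients concrete, the following is a minimal sketch in Python/NumPy, assuming a discrete-observation HMM. It computes the a posteriori state occupation probabilities gamma[t, j] with a scaled forward-backward pass and then takes one gradient-ascent step on a toy discrimination score (the mean posterior of a designated "correct" state at each frame). The function names, the particular score, and the finite-difference gradient are illustrative assumptions; the paper derives analytic gradients by error backpropagation through the model rather than by numerical differencing.

import numpy as np

def state_occupation_posteriors(A, B, pi, obs):
    """Scaled forward-backward pass for a discrete-observation HMM.
    A: (N, N) transition matrix, B: (N, M) emission matrix,
    pi: (N,) initial state probabilities, obs: sequence of symbol indices.
    Returns gamma of shape (T, N): P(state j at time t | observations)."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))   # scaled forward probabilities
    beta = np.zeros((T, N))    # scaled backward probabilities
    scale = np.zeros(T)

    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum()
    alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum()
        alpha[t] /= scale[t]

    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]

    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

def gradient_ascent_step(A, B, pi, obs, correct, lr=0.1, eps=1e-4):
    """One gradient-ascent step on a toy discrimination score: the mean
    posterior occupation probability of the 'correct' state at each frame.
    Uses a finite-difference gradient over the emission parameters purely
    for illustration; rows of B are re-normalised after the update."""
    def score(Bmat):
        g = state_occupation_posteriors(A, Bmat, pi, obs)
        return g[np.arange(len(obs)), correct].mean()

    grad = np.zeros_like(B)
    for idx in np.ndindex(*B.shape):
        Bp = B.copy()
        Bp[idx] += eps
        grad[idx] = (score(Bp) - score(B)) / eps

    B_new = np.clip(B + lr * grad, 1e-6, None)
    return B_new / B_new.sum(axis=1, keepdims=True)

In this sketch, recognition by the "simple threshold test" mentioned above would amount to accepting a hypothesis whenever its correct-state posteriors exceed a chosen threshold; the sketch stops at the posterior computation and a single parameter update.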
