The hidden Markov model (HMM) is a natural and robust statistical methodology for automatic speech recognition, and it has also proved important in a wide range of other applications. The model parameters of an HMM are essential for describing the behavior of the speech segments being modeled. Many successful heuristic algorithms have been developed to optimize these parameters so as to best describe the training observation sequences. In practice, however, all of these methods converge to a single local maximum, and none can escape a local maximum to reach the global maximum or a better local maximum. This paper presents a stochastic search method, the genetic algorithm (GA), for HMM training. The GA mimics natural evolution and performs a global search within the defined search space. Experiments showed that GA-based HMM training (GA-HMM training) yields better performance than other heuristic algorithms.
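The idea described above can be sketched in code. The following is a minimal illustration, not the paper's actual encoding or operators: each GA individual is a complete set of HMM parameters (initial, transition, and emission probabilities), fitness is the likelihood of the training sequence computed with the standard forward algorithm, and the population evolves by elitist selection plus Gaussian mutation of the probability rows. All function names and hyperparameters here are illustrative assumptions.

```python
import random

def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: likelihood P(obs | HMM) for a discrete-symbol HMM."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs[t]]
                 for i in range(n)]
    return sum(alpha)

def normalize(row):
    s = sum(row)
    return [x / s for x in row]

def random_hmm(n_states, n_symbols, rng):
    """One GA individual: a random stochastic (pi, A, B) parameter set."""
    pi = normalize([rng.random() for _ in range(n_states)])
    A = [normalize([rng.random() for _ in range(n_states)]) for _ in range(n_states)]
    B = [normalize([rng.random() for _ in range(n_symbols)]) for _ in range(n_states)]
    return (pi, A, B)

def mutate(hmm, rng, scale=0.1):
    """Gaussian jitter on each probability row, renormalized to stay stochastic."""
    def jitter(row):
        return normalize([max(1e-6, x + rng.gauss(0.0, scale)) for x in row])
    pi, A, B = hmm
    return (jitter(pi), [jitter(r) for r in A], [jitter(r) for r in B])

def ga_hmm_train(obs, n_states, n_symbols, pop_size=30, generations=50, seed=0):
    """Elitist GA search over HMM parameter sets, maximizing sequence likelihood."""
    rng = random.Random(seed)
    pop = [random_hmm(n_states, n_symbols, rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda h: forward_likelihood(obs, *h), reverse=True)
        elite = pop[:pop_size // 2]  # keep the fitter half
        pop = elite + [mutate(rng.choice(elite), rng)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda h: forward_likelihood(obs, *h))
```

Because selection is elitist, the best parameter set found so far always survives, while mutation keeps the population exploring the search space globally rather than hill-climbing from a single starting point as Baum-Welch-style reestimation does. A full GA would typically add crossover between parent individuals; it is omitted here for brevity.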