Distributed hidden Markov model training on loosely-coupled multiprocessor networks

An explicit-duration hidden Markov model (HMM) algorithm for speech recognition has been proposed that potentially provides a more precise and versatile duration model than the implicit models ordinarily used, but at the cost of increased computation. The authors address the computational issues involved in conventional and explicit-duration HMM training by providing an analysis of the algorithm, suggesting serial enhancements and two efficient parallel implementations, and presenting experimental results on both a network of common workstations and a parallel system.
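
For context on the cost trade-off, the following is a minimal sketch of the explicit-duration (hidden semi-Markov) forward recursion in standard textbook notation; the symbols below are illustrative assumptions, not necessarily the authors' exact formulation. A conventional HMM models state duration implicitly through the self-transition probability a_{ii}, which forces a geometric distribution:

\[
P(d \mid i) = a_{ii}^{\,d-1}\,(1 - a_{ii}).
\]

The explicit-duration model replaces this with an arbitrary duration distribution p_j(d) capped at some maximum D, so the forward variable must sum over all possible durations of the current state:

\[
\alpha_t(j) \;=\; \sum_{d=1}^{D} \sum_{i \ne j} \alpha_{t-d}(i)\, a_{ij}\, p_j(d) \prod_{s=t-d+1}^{t} b_j(o_s),
\]

where b_j(o_s) is the observation probability in state j at frame s. The extra sum over d raises the per-utterance training cost from roughly O(N^2 T) to O(N^2 T D) for N states and T frames, which is the increased computation that motivates the serial enhancements and parallel implementations described here.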