Speech recognition of monosyllables using hidden Markov model in VHDL
This paper describes a real-time speech recognition chip for monosyllables such as A, B, etc. The chip is designed to recognize four monosyllables based on the hidden Markov model (HMM), a well-known speaker-independent recognition method. It accepts a short speech segment of 185.6 ms and outputs a 2-bit symbol code identifying the monosyllable. The input speech is divided into 16 frames of 11.6 ms each, features are extracted from these frames after spectral computation, and HMM-based recognition is then performed on the extracted features.
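The recognition step described above can be illustrated in software with a minimal C sketch: each of the four monosyllables is modeled by a discrete HMM, the 16 frame-level feature symbols are scored against every model with the forward algorithm, and the index of the best-scoring model is emitted as the 2-bit code. This is not the authors' hardware design; the number of states, the codebook size, and all probability values below are illustrative assumptions, since the abstract does not specify them.

/* Minimal sketch (assumed parameters, not the paper's implementation):
 * score a 16-frame observation sequence against four discrete HMMs
 * using the forward algorithm and pick the best-matching monosyllable. */
#include <stdio.h>

#define N_MODELS   4    /* one HMM per monosyllable */
#define N_STATES   3    /* assumed number of states per model */
#define N_SYMBOLS  8    /* assumed codebook size for frame features */
#define N_FRAMES   16   /* 16 frames of 11.6 ms each (185.6 ms total) */

typedef struct {
    double pi[N_STATES];            /* initial state probabilities */
    double a[N_STATES][N_STATES];   /* state transition probabilities */
    double b[N_STATES][N_SYMBOLS];  /* discrete emission probabilities */
} hmm_t;

/* Forward algorithm: P(observation sequence | model). */
static double forward_score(const hmm_t *m, const int obs[N_FRAMES])
{
    double alpha[N_STATES], next[N_STATES];

    for (int i = 0; i < N_STATES; i++)
        alpha[i] = m->pi[i] * m->b[i][obs[0]];

    for (int t = 1; t < N_FRAMES; t++) {
        for (int j = 0; j < N_STATES; j++) {
            double sum = 0.0;
            for (int i = 0; i < N_STATES; i++)
                sum += alpha[i] * m->a[i][j];
            next[j] = sum * m->b[j][obs[t]];
        }
        for (int j = 0; j < N_STATES; j++)
            alpha[j] = next[j];
    }

    double p = 0.0;
    for (int i = 0; i < N_STATES; i++)
        p += alpha[i];
    return p;
}

/* Recognizer: return the 2-bit code (0..3) of the best-scoring model. */
static int recognize(const hmm_t models[N_MODELS], const int obs[N_FRAMES])
{
    int best = 0;
    double best_p = forward_score(&models[0], obs);
    for (int k = 1; k < N_MODELS; k++) {
        double p = forward_score(&models[k], obs);
        if (p > best_p) { best_p = p; best = k; }
    }
    return best;
}

int main(void)
{
    /* Dummy uniform models and an arbitrary observation sequence,
     * only to exercise the scoring path. */
    hmm_t models[N_MODELS];
    for (int k = 0; k < N_MODELS; k++)
        for (int i = 0; i < N_STATES; i++) {
            models[k].pi[i] = 1.0 / N_STATES;
            for (int j = 0; j < N_STATES; j++)
                models[k].a[i][j] = 1.0 / N_STATES;
            for (int s = 0; s < N_SYMBOLS; s++)
                models[k].b[i][s] = 1.0 / N_SYMBOLS;
        }
    int obs[N_FRAMES] = {0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7};
    printf("recognized symbol code: %d\n", recognize(models, obs));
    return 0;
}

In hardware, the same per-frame recurrence maps naturally to a fixed-point multiply-accumulate pipeline, which is why the forward (or Viterbi) scoring step is a common choice for HMM recognition chips.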