A Novel Information Integration Algorithm for Speech Recognition Systems: Based on Adaptive Clustering and Supervised States of Acoustic Features

When the most likely state sequence (MLSS) criterion is used in a Gaussian mixture model-hidden Markov model (GMM-HMM) to obtain the best state sequence for the observations, only the maximum-likelihood state of each speech frame is considered. The influence of the other states is therefore neglected, which causes some important information to be lost and in turn lowers the recognition rate of the system. In this paper, we propose two new features, the state likelihood cluster feature (SLCF) and the supervised state feature (SSF), which both reflect acoustic characteristics and fuse state information. Combining SLCF and SSF with the Mel frequency cepstrum coefficient (MFCC) yields the Mel frequency cepstrum & state likelihood cluster feature (MSLCF) and the Mel frequency cepstrum & supervised state feature (MSSF), respectively. In Chinese speech recognition experiments, the proposed MSLCF and MSSF reduce the relative error rate of the isolated-word recognition system by 6.10% and 9.66%, respectively, and that of the continuous speech recognition system by 2.53% and 11.05%, respectively.
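
The abstract does not specify how SLCF and SSF are actually constructed (their adaptive clustering and supervision steps are detailed later in the paper), so the sketch below only illustrates the general idea it describes: augmenting each MFCC frame with likelihood information from all GMM-HMM states rather than keeping only the most likely one. All function names and the single-Gaussian-per-state simplification are assumptions made for illustration, not the authors' algorithm.

```python
import numpy as np

def state_log_likelihoods(frame, means, covs, log_priors):
    """Log-likelihood of one MFCC frame under each state.
    Simplified here to a single diagonal Gaussian per state (hypothetical)."""
    diff = frame - means                              # (n_states, dim)
    quad = -0.5 * np.sum(diff ** 2 / covs, axis=1)    # Mahalanobis-style term
    norm = -0.5 * np.sum(np.log(2 * np.pi * covs), axis=1)
    return log_priors + norm + quad                   # (n_states,)

def build_augmented_feature(frame, means, covs, log_priors):
    """Concatenate the MFCC frame with normalized state likelihoods,
    so information from all states, not only the maximum-likelihood one,
    is retained in the feature vector (illustrative sketch only)."""
    ll = state_log_likelihoods(frame, means, covs, log_priors)
    weights = np.exp(ll - np.max(ll))
    weights /= weights.sum()                          # posterior-like state weights
    return np.concatenate([frame, weights])

# Toy usage: a 13-dimensional MFCC frame and 5 hypothetical states
rng = np.random.default_rng(0)
frame = rng.normal(size=13)
means = rng.normal(size=(5, 13))
covs = np.ones((5, 13))
log_priors = np.log(np.full(5, 1 / 5))
augmented = build_augmented_feature(frame, means, covs, log_priors)
print(augmented.shape)                                # (18,) = 13 MFCC dims + 5 state weights
```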