Lightly supervised acoustic model training using consensus networks

This paper presents recent work on using consensus networks to improve lightly supervised acoustic model training for the LIMSI Mandarin BN system. Lightly supervised acoustic model training has been attracting growing interest, since it can substantially reduce the development costs of speech recognition systems. Compared to supervised training with accurate transcriptions, the key problem in lightly supervised training is getting the approximate transcripts as close as possible to manually produced detailed ones, i.e., finding an appropriate way to provide the information needed for supervision. Previous work using a language model to provide this supervision has been quite successful. This paper extends the original method with a new way of obtaining the supervision information during training. Studies are carried out on the TDT4 Mandarin audio corpus and the associated closed-captions. After automatically recognizing the training data, the closed-captions are aligned with a consensus network derived from the hypothesis lattices. As with closed-caption filtering, this method can remove speech segments whose automatic transcripts contain errors, but it can also correct errors in the hypothesis when the correct words are present in the lattice. Experimental results show that, compared with simply training on all of the data, consensus-network-based lightly supervised acoustic model training yields a small reduction in the character error rate on the DARPA/NIST RT'03 development and evaluation data.
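As a rough illustration of the alignment step described above, the sketch below aligns closed-caption words to confusion-network bins with an edit-distance dynamic program, takes the caption word whenever it appears among a bin's alternatives (the case where lattice information can correct a hypothesis error), and filters out segments whose mismatch rate is too high. The bin representation, the alignment costs, and the filtering threshold are illustrative assumptions, not the paper's actual procedure.

```python
from typing import List, Tuple

Bin = List[Tuple[str, float]]  # (word, posterior) alternatives in one confusion bin


def align_captions_to_cn(caption: List[str], cn: List[Bin]):
    """Edit-distance alignment of closed-caption words against confusion-network bins.

    A caption word 'matches' a bin if it appears among the bin's alternatives,
    regardless of whether it was the top (consensus) hypothesis.
    """
    n, m = len(caption), len(cn)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]   # DP costs
    back = [[None] * (m + 1) for _ in range(n + 1)]  # back-pointers (operation taken)
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            c = cost[i][j]
            if i < n and j < m:
                words = {w for w, _ in cn[j]}
                step = 0.0 if caption[i] in words else 1.0  # match vs. substitution
                if c + step < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1] = c + step
                    back[i + 1][j + 1] = "match" if step == 0.0 else "sub"
            if i < n and c + 1.0 < cost[i + 1][j]:  # caption word with no matching bin
                cost[i + 1][j] = c + 1.0
                back[i + 1][j] = "ins"
            if j < m and c + 1.0 < cost[i][j + 1]:  # bin with no caption word
                cost[i][j + 1] = c + 1.0
                back[i][j + 1] = "del"

    # Backtrace: build a corrected transcript and count disagreements.
    i, j, out, errors = n, m, [], 0
    while i > 0 or j > 0:
        op = back[i][j]
        if op in ("match", "sub"):
            if op == "match":
                # Take the caption word even when it was not the top hypothesis:
                # this is where the lattice information corrects recognition errors.
                out.append(caption[i - 1])
            else:
                out.append(max(cn[j - 1], key=lambda x: x[1])[0])  # keep top hypothesis
                errors += 1
            i, j = i - 1, j - 1
        elif op == "ins":   # caption word dropped (no aligned bin)
            errors += 1
            i -= 1
        else:               # "del": bin skipped (no aligned caption word)
            errors += 1
            j -= 1
    out.reverse()
    mismatch_rate = errors / max(len(cn), 1)
    return out, mismatch_rate


def keep_segment(caption: List[str], cn: List[Bin], max_mismatch: float = 0.2) -> bool:
    """Illustrative filtering rule: keep a segment for acoustic model training
    only if the caption and the consensus network agree closely enough
    (the threshold here is arbitrary, not taken from the paper)."""
    _, rate = align_captions_to_cn(caption, cn)
    return rate <= max_mismatch
```

In this sketch, a segment with many unaligned caption words or bins is discarded, mirroring closed-caption filtering, while matched segments contribute a transcript in which caption words have overridden erroneous top hypotheses wherever the lattice contained the correct alternative.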