Recognition performance of a large-scale dependency grammar language model

In this paper, we describe a large-scale investigation of dependency grammar language models. Our work includes several significant departures from earlier studies, notably a larger training corpus, an improved model structure, different feature types, new feature selection methods, and more coherent training and test data. We report word error rate (WER) results from a speech recognition experiment in which we used these models to rescore the output of the IBM speech recognition system.
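To make the rescoring setup concrete, the sketch below shows one common form of N-best rescoring, in which a language model score is interpolated with the recognizer's acoustic score and the highest-scoring hypothesis is selected. This is a minimal illustration under assumed conventions, not the paper's actual pipeline: the `Hypothesis` structure, the stand-in `lm_logprob` function, and the `lm_weight` parameter are all hypothetical.

```python
# Minimal sketch of N-best rescoring (illustrative; names are hypothetical).
# Assumes the recognizer emits hypotheses with acoustic log-likelihoods and
# a language model supplies log-probabilities for each word sequence.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    words: list[str]
    acoustic_logprob: float  # log P(audio | words) from the first-pass recognizer

def lm_logprob(words: list[str]) -> float:
    """Stand-in for the language model score, log P(words).
    A real system would query the dependency grammar LM here."""
    return -2.0 * len(words)  # placeholder: uniform per-word cost

def rescore(nbest: list[Hypothesis], lm_weight: float = 10.0) -> Hypothesis:
    """Return the hypothesis maximizing acoustic score plus weighted LM score.
    lm_weight is a tuning parameter, typically set on held-out data."""
    return max(
        nbest,
        key=lambda h: h.acoustic_logprob + lm_weight * lm_logprob(h.words),
    )

# Usage example with two toy hypotheses:
nbest = [
    Hypothesis(["recognize", "speech"], acoustic_logprob=-120.5),
    Hypothesis(["wreck", "a", "nice", "beach"], acoustic_logprob=-118.9),
]
best = rescore(nbest)
print(" ".join(best.words))
```

Because rescoring only reorders a fixed hypothesis list, it lets a more expensive language model be applied without re-running the first-pass search.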