Mixture of mixture n-gram language models
Cyril Allauzen | Hasim Sak | Françoise Beaufays | Kaisuke Nakajima