Improving speech understanding accuracy with limited training data using multiple language models and multiple understanding models
Tetsuya Ogata | Hiroshi G. Okuno | Mikio Nakano | Kazunori Komatani | Kotaro Funakoshi | Masaki Katsumaru