End-to-End Architectures for ASR-Free Spoken Language Understanding