Neural Networks approaches focused on French Spoken Language Understanding: application to the MEDIA Evaluation Task