Aitor Gonzalez-Agirre | Maite Melero | Jordi Armengol-Estapé | Carme Armentano-Oller | Casimiro Pio Carrino | Marta Villegas | Carlos Rodríguez-Penagos | Ona de Gibert Bonet