Sampo Pyysalo | Filip Ginter | Jenna Kanerva | Antti Virtanen
[1] Omer Levy, et al. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems, 2019, NeurIPS.
[2] Daniel Kondratyuk, et al. 75 Languages, 1 Model: Parsing Universal Dependencies Universally, 2019, EMNLP.
[3] Sampo Pyysalo, et al. The birth of Romanian BERT, 2020, Findings of EMNLP.
[4] Mikhail Arkhipov, et al. Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language, 2019, arXiv.
[5] Laurent Romary, et al. CamemBERT: a Tasty French Language Model, 2019, ACL.
[6] Philip Gage. A new algorithm for data compression, 1994, C Users Journal.
[7] Tapio Salakoski, et al. Multilingual is not enough: BERT for Finnish, 2019, arXiv.
[8] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[9] Sebastian Ruder, et al. Universal Language Model Fine-tuning for Text Classification, 2018, ACL.
[10] Jeffrey Dean, et al. Efficient Estimation of Word Representations in Vector Space, 2013, ICLR.
[11] Sampo Pyysalo, et al. Intrinsic Evaluation of Word Vectors Fails to Predict Extrinsic Performance, 2016, RepEval@ACL.
[12] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[13] Veselin Stoyanov, et al. Unsupervised Cross-lingual Representation Learning at Scale, 2019, ACL.
[14] Matej Ulčar, et al. FinEst BERT and CroSloEngual BERT: less is more in multilingual models, 2020, TSD.
[15] Tommaso Caselli,et al. BERTje: A Dutch BERT Model , 2019, ArXiv.
[16] Jeffrey Pennington,et al. GloVe: Global Vectors for Word Representation , 2014, EMNLP.
[17] Anders Holst,et al. Random indexing of text samples for latent semantic analysis , 2000 .
[18] Jan Hajic,et al. Neural Architectures for Nested NER through Linearization , 2019, ACL.
[19] Sanja Fidler,et al. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[20] Colin Raffel,et al. mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer , 2021, NAACL.
[21] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[22] Luke S. Zettlemoyer,et al. Deep Contextualized Word Representations , 2018, NAACL.
[23] Colin Raffel,et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer , 2019, J. Mach. Learn. Res..
[24] Kevin Gimpel,et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations , 2019, ICLR.
[25] Sampo Pyysalo,et al. Universal Dependencies v1: A Multilingual Treebank Collection , 2016, LREC.
[26] Eva Schlinger,et al. How Multilingual is Multilingual BERT? , 2019, ACL.
[27] Sampo Pyysalo,et al. Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection , 2020, LREC.
[28] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[29] Jan Hajic,et al. UDPipe: Trainable Pipeline for Processing CoNLL-U Files Performing Tokenization, Morphological Analysis, POS Tagging and Parsing , 2016, LREC.
[30] Rico Sennrich,et al. Neural Machine Translation of Rare Words with Subword Units , 2015, ACL.
[31] Taku Kudo,et al. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing , 2018, EMNLP.
[32] Sampo Pyysalo,et al. Towards Fully Bilingual Deep Language Modeling , 2020, ArXiv.
[33] Thomas Wolf,et al. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter , 2019, ArXiv.