Sampo Pyysalo | Filip Ginter | Jenna Kanerva | Li-Hsin Chang
[1] Chris Brockett, et al. Automatically Constructing a Corpus of Sentential Paraphrases, 2005, IJCNLP.
[2] Samuel R. Bowman, et al. Neural Network Acceptability Judgments, 2018, Transactions of the Association for Computational Linguistics.
[3] Tapio Salakoski, et al. Multilingual is not enough: BERT for Finnish, 2019, ArXiv.
[4] Peter Clark, et al. The Seventh PASCAL Recognizing Textual Entailment Challenge, 2011, TAC.
[5] Omer Levy, et al. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems, 2019, NeurIPS.
[6] Anna Rumshisky, et al. Revealing the Dark Secrets of BERT, 2019, EMNLP.
[7] George Kurian, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, ArXiv.
[8] Mikel Artetxe, et al. On the Cross-lingual Transferability of Monolingual Representations, 2019, ACL.
[9] Omer Levy, et al. Annotation Artifacts in Natural Language Inference Data, 2018, NAACL.
[10] Atro Voutilainen, et al. Specifying Treebanks, Outsourcing Parsebanks: FinnTreeBank 3, 2012, LREC.
[11] Matej Ulčar, Marko Robnik-Šikonja. FinEst BERT and CroSloEngual BERT: less is more in multilingual models, 2020, TSD.
[12] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[13] Ido Dagan, et al. The Sixth PASCAL Recognizing Textual Entailment Challenge, 2009, TAC.
[14] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[15] Rico Sennrich, et al. Neural Machine Translation of Rare Words with Subword Units, 2015, ACL.
[16] Miikka Silfverberg, et al. A Finnish news corpus for named entity recognition, 2019, Language Resources and Evaluation.
[17] Nizar Habash, et al. CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, 2017, CoNLL.
[18] Thomas Wolf, et al. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, 2019, ArXiv.
[19] Jan Pomikálek. Removing Boilerplate and Duplicate Content from Web Corpora, 2011.
[20] Martin Potthast, et al. CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, 2018, CoNLL.
[21] Veselin Stoyanov, et al. Unsupervised Cross-lingual Representation Learning at Scale, 2019, ACL.
[22] Sampo Pyysalo, et al. WikiBERT Models: Deep Transfer Learning for Many Languages, 2020, NODALIDA.
[23] Jeffrey Dean, et al. Distributed Representations of Words and Phrases and their Compositionality, 2013, NIPS.
[24] Sampo Pyysalo, et al. Universal Dependencies v1: A Multilingual Treebank Collection, 2016, LREC.
[25] Ido Dagan, et al. The Third PASCAL Recognizing Textual Entailment Challenge, 2007, ACL-PASCAL@ACL.
[26] Eneko Agirre, et al. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation, 2017, *SEMEVAL.
[27] Dan Roth, et al. Cross-Lingual Ability of Multilingual BERT: An Empirical Study, 2019, ICLR.
[28] Eva Schlinger, et al. How Multilingual is Multilingual BERT?, 2019, ACL.
[29] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[30] Tommaso Caselli, et al. BERTje: A Dutch BERT Model, 2019, ArXiv.
[31] Hector J. Levesque, et al. The Winograd Schema Challenge, 2011, AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning.
[32] Mark Dredze, et al. Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT, 2019, EMNLP.
[33] Taku Kudo, et al. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing, 2018, EMNLP.
[34] Laurent Romary, et al. CamemBERT: a Tasty French Language Model, 2019, ACL.
[35] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[36] Kevin Gimpel, et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, 2019, ICLR.
[37] Roy Bar-Haim, et al. The Second PASCAL Recognising Textual Entailment Challenge, 2006.
[38] Dan Klein, et al. Multilingual Alignment of Contextual Word Representations, 2020, ICLR.
[39] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[40] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[41] Sanja Fidler, et al. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books, 2015, IEEE International Conference on Computer Vision (ICCV).
[42] J. Quiñonero-Candela, et al. Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, 2006, Lecture Notes in Computer Science.
[43] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[44] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[45] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2017, NAACL.
[46] Daniel Kondratyuk, et al. 75 Languages, 1 Model: Parsing Universal Dependencies Universally, 2019, EMNLP.
[47] Veronika Laippala, et al. Universal Dependencies for Finnish, 2015, NODALIDA.
[48] Yejin Choi, et al. SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference, 2018, EMNLP.
[49] James Demmel, et al. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes, 2019, ICLR.
[50] Anna Rumshisky, et al. A Primer in BERTology: What We Know About How BERT Works, 2020, Transactions of the Association for Computational Linguistics.