What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models
[1] Dipanjan Das, et al. BERT Rediscovers the Classical NLP Pipeline, 2019, ACL.
[2] Omer Levy, et al. What Does BERT Look at? An Analysis of BERT’s Attention, 2019, BlackboxNLP@ACL.
[3] Noah D. Goodman, et al. Evaluating Compositionality in Sentence Embeddings, 2018, CogSci.
[4] Roger Levy, et al. Neural language models as psycholinguistic subjects: Representations of syntactic state, 2019, NAACL.
[5] Christopher Potts, et al. A large annotated corpus for learning natural language inference, 2015, EMNLP.
[6] Roger Levy, et al. What do RNN Language Models Learn about Filler–Gap Dependencies?, 2018, BlackboxNLP@EMNLP.
[7] Gabriella Vigliocco, et al. Word surprisal predicts N400 amplitude during reading, 2013, ACL.
[8] Robert Frank, et al. Open Sesame: Getting inside BERT’s Linguistic Knowledge, 2019, BlackboxNLP@ACL.
[9] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[10] Eneko Agirre, et al. SemEval-2012 Task 6: A Pilot on Semantic Textual Similarity, 2012, *SEMEVAL.
[11] M. Kutas, et al. Brain potentials during reading reflect word expectancy and semantic association, 1984, Nature.
[12] Yonatan Belinkov, et al. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks, 2016, ICLR.
[13] S. A. Chowdhury, et al. RNN Simulations of Grammaticality Judgments on Long-distance Dependencies, 2018, COLING.
[14] Allyson Ettinger, et al. Assessing Composition in Sentence Vector Representations, 2018, COLING.
[15] Marco Baroni, et al. The emergence of number and syntax units in LSTM language models, 2019, NAACL.
[16] Luke S. Zettlemoyer, et al. Dissecting Contextual Word Embeddings: Architecture and Representation, 2018, EMNLP.
[17] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[18] Tal Linzen, et al. Targeted Syntactic Evaluation of Language Models, 2018, EMNLP.
[19] Florian Mohnert, et al. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information, 2018, BlackboxNLP@EMNLP.
[20] Dieuwke Hupkes, et al. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items, 2018, BlackboxNLP@EMNLP.
[21] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[22] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[23] Alex Wang, et al. Probing What Different NLP Tasks Teach Machines about Function Word Comprehension, 2019, *SEMEVAL.
[24] Guillaume Lample, et al. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties, 2018, ACL.
[25] Edouard Grave, et al. Colorless Green Recurrent Networks Dream Hierarchically, 2018, NAACL.
[26] Ido Dagan, et al. The Third PASCAL Recognizing Textual Entailment Challenge, 2007, ACL-PASCAL@ACL.
[27] Sandro Pezzelle, et al. The LAMBADA dataset: Word prediction requiring a broad discourse context, 2016, ACL.
[28] Marco Marelli, et al. SICK through the SemEval glasses. Lesson learned from the evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment, 2016, Language Resources and Evaluation.
[29] Colin Phillips, et al. A “bag-of-arguments” mechanism for initial verb predictions, 2016, Language, Cognition and Neuroscience.
[30] Yoav Goldberg, et al. Assessing BERT’s Syntactic Abilities, 2019, arXiv.
[31] Mante S. Nieuwland, et al. When the Truth Is Not Too Hard to Handle, 2008, Psychological Science.
[32] Kara D. Federmeier, et al. A Rose by Any Other Name: Long-Term Memory Structure and Sentence Processing, 1999, Journal of Memory and Language.
[33] R. Thomas McCoy, et al. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference, 2019, ACL.
[34] Emmanuel Dupoux, et al. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies, 2016, TACL.
[35] Rachel Rudinger, et al. Collecting Diverse Natural Language Inference Problems for Sentence Representation Evaluation, 2018, BlackboxNLP@EMNLP.
[36] Alex Wang, et al. What do you learn from context? Probing for sentence structure in contextualized word representations, 2019, ICLR.
[37] Salim Roukos, et al. Brain potentials related to stages of sentence verification, 1983, Psychophysiology.