BLiMP: A Benchmark of Linguistic Minimal Pairs for English
Samuel R. Bowman | Sheng-Fu Wang | Wei Peng | Haokun Liu | Alex Warstadt | Alicia Parrish | Anhad Mohananey
[1] Samuel R. Bowman, et al. jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models, 2020, ACL.
[2] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[3] Rui P. Chaves, et al. What Don’t RNN Language Models Learn About Filler-Gap Dependencies?, 2020, SCIL.
[4] Rui P. Chaves, et al. Assessing the ability of Transformer-based Neural Models to represent structurally unbounded dependencies, 2020, SCIL.
[5] Shikha Bordia, et al. Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs, 2019, EMNLP.
[6] Peng Qian, et al. Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study, 2019, EMNLP.
[7] S. A. Chowdhury, et al. An LSTM Adaptation Study of (Un)grammaticality, 2019, BlackboxNLP@ACL.
[8] Alex Wang, et al. What do you learn from context? Probing for sentence structure in contextualized word representations, 2019, ICLR.
[9] Omer Levy, et al. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems, 2019, NeurIPS.
[10] Roger Levy, et al. Structural Supervision Improves Learning of Non-Local Grammatical Dependencies, 2019, NAACL.
[11] Samuel R. Bowman, et al. Linguistic Analysis of Pretrained Sentence Encoders with Acceptability Judgments, 2019.
[12] Samuel R. Bowman, et al. Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments, 2019, ArXiv.
[13] Yiming Yang, et al. Transformer-XL: Attentive Language Models beyond a Fixed-Length Context, 2019, ACL.
[14] Samuel R. Bowman, et al. Neural Network Acceptability Judgments, 2018, Transactions of the Association for Computational Linguistics.
[15] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[16] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[17] Samuel R. Bowman, et al. Verb Argument Structure Alternations in Word and Sentence Embeddings, 2018, ArXiv.
[18] Roger Levy, et al. RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency, 2018, ArXiv.
[19] Roger Levy, et al. What do RNN Language Models Learn about Filler–Gap Dependencies?, 2018, BlackboxNLP@EMNLP.
[20] Dieuwke Hupkes, et al. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items, 2018, BlackboxNLP@EMNLP.
[21] Tal Linzen, et al. Targeted Syntactic Evaluation of Language Models, 2018, EMNLP.
[22] Jon Sprouse, et al. Investigating variation in island effects, 2017, Natural Language & Linguistic Theory.
[23] S. A. Chowdhury, et al. RNN Simulations of Grammaticality Judgments on Long-distance Dependencies, 2018, COLING.
[24] Allyson Ettinger, et al. Assessing Composition in Sentence Vector Representations, 2018, COLING.
[25] Guillaume Lample, et al. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties, 2018, ACL.
[26] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[27] Edouard Grave, et al. Colorless Green Recurrent Networks Dream Hierarchically, 2018, NAACL.
[28] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[29] Sebastian Ruder, et al. Universal Language Model Fine-tuning for Text Classification, 2018, ACL.
[30] Alexander Clark, et al. Grammaticality, Acceptability, and Probability: A Probabilistic View of Linguistic Knowledge, 2017, Cogn. Sci.
[31] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[32] Richard Socher, et al. Pointer Sentinel Mixture Models, 2016, ICLR.
[33] Yonatan Belinkov, et al. Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks, 2016, ICLR.
[34] Emmanuel Dupoux, et al. Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies, 2016, TACL.
[35] Xing Shi, et al. Does String-Based Neural MT Learn Source Syntax?, 2016, EMNLP.
[36] Edward P. Stabler, et al. An Introduction to Syntactic Analysis and Theory, 2016.
[37] Nitin Madnani, et al. Predicting Grammaticality on an Ordinal Scale, 2014, ACL.
[38] Edward P. Stabler, et al. An Introduction to Syntactic Analysis and Theory, 2013.
[39] G. Chierchia, et al. Logic in Grammar: Polarity, Free Choice, and Intervention, 2013.
[40] Philipp Koehn, et al. Scalable Modified Kneser-Ney Language Model Estimation, 2013, ACL.
[41] Alec Marantz, et al. Verbal argument structure: Events and participants, 2013.
[42] Hermann Ney, et al. LSTM Neural Networks for Language Modeling, 2012, INTERSPEECH.
[43] Kenneth Heafield, et al. KenLM: Faster and Smaller Language Model Queries, 2011, WMT@EMNLP.
[44] Dan Klein, et al. Faster and Smaller N-Gram Language Models, 2011, ACL.
[45] Lukáš Burget, et al. Recurrent neural network based language model, 2010, INTERSPEECH.
[46] B. Geurts, et al. At least et al.: The semantics of scalar modifiers, 2007.
[47] Thorsten Brants, et al. Large Language Models in Machine Translation, 2007, EMNLP.
[48] Morten H. Christiansen, et al. Uncovering the Richness of the Stimulus: Structure Dependence and Indirect Statistical Evidence, 2005, Cogn. Sci.
[49] David Adger, et al. Core Syntax: A Minimalist Approach, 2003.
[50] Carson T. Schütze. The empirical base of linguistics: Grammaticality judgments and linguistic methodology, 1998.
[51] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[52] Stanley F. Chen, et al. An Empirical Study of Smoothing Techniques for Language Modeling, 1996, ACL.
[53] Betty J. Birner, et al. Definiteness and the English Existential, 1995.
[54] K. Bock, et al. Broken agreement, 1991, Cognitive Psychology.
[55] Norbert Hornstein, et al. Logic as Grammar, 1984.