BLiMP: A Benchmark of Linguistic Minimal Pairs for English

We introduce the Benchmark of Linguistic Minimal Pairs (BLiMP), a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs, that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate a specific phenomenon in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and aggregate human agreement with the labels is 96.4%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models reliably identify morphological contrasts related to agreement, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands.
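As a concrete illustration of this forced-choice protocol, the sketch below scores both members of a pair with GPT-2 and checks whether the acceptable sentence receives the higher total log probability. It assumes the Hugging Face transformers library, and the agreement pair used here is an invented example, not an item from the released BLiMP data or the paper's own evaluation code.

```python
# Forced-choice scoring sketch: the model "prefers" a sentence if it assigns
# it a higher summed log probability. Assumes the Hugging Face transformers
# library; the sentence pair below is an invented illustration, not an item
# from the released BLiMP data.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Summed log probability of the sentence's tokens (given the first token)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy over
        # the seq_len - 1 predicted tokens; undo the averaging to get a sum.
        loss = model(input_ids=ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

good = "These dogs bark at night."
bad = "These dogs barks at night."
# A model that has learned subject-verb agreement should prefer the good sentence.
print(sentence_log_prob(good) > sentence_log_prob(bad))
```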
