Numerical reasoning in machine reading comprehension tasks: are we there yet?

Numerical-reasoning-based machine reading comprehension is a task that combines reading comprehension with discrete operations over numbers, such as addition, subtraction, sorting, and counting. The DROP benchmark (Dua et al., 2019) is a recent dataset that has inspired the design of NLP models aimed at solving this task. The current standings of these models on the DROP leaderboard, under the standard metrics, suggest that they have achieved near-human performance. However, does this mean that these models have learned to reason? In this paper, we present a controlled study of some of the top-performing model architectures for the task of numerical reasoning. Our observations suggest that the standard metrics are incapable of measuring progress on such tasks.
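
The kind of question this task targets, and the discrete operations it requires, can be made concrete with a small sketch. The Python snippet below is illustrative only: the passage, question, and `extract_numbers` helper are invented for this example, not taken from DROP or from any model discussed here. It shows the number extraction, sorting, and subtraction steps that a DROP-style question composes, steps that leaderboard models must learn end-to-end rather than via hand-written rules like these.

```python
import re

# An invented DROP-style passage/question pair, for illustration only;
# real examples come from the DROP dataset (Dua et al., 2019).
passage = (
    "The Broncos opened with a 23-yard field goal and later added "
    "touchdown runs of 7, 12, and 45 yards."
)
question = "How many yards longer was the longest touchdown run than the shortest?"

def extract_numbers(text):
    """Pull all integers out of the text: a hand-written stand-in for the
    number grounding that DROP models learn from data."""
    return [int(n) for n in re.findall(r"\d+", text)]

# The discrete reasoning the question calls for:
# select the relevant numbers, sort them, then subtract.
touchdowns = extract_numbers(passage)[1:]  # skip the field-goal distance
touchdowns.sort()                          # sorting
answer = touchdowns[-1] - touchdowns[0]    # subtraction: 45 - 7

print(answer)  # 38
```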

[1] Douwe Kiela et al. Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little. EMNLP, 2021.

[2] Graham Neubig et al. ExplainaBoard: An Explainable Leaderboard for NLP. ACL, 2021.

[3] Pedro A. Szekely et al. Representing Numbers in NLP: a Survey and a Vision. NAACL, 2021.

[4] Vivek Srikumar et al. BERT & Family Eat Word Salad: Experiments with Text Understanding. AAAI, 2021.

[5] Joelle Pineau et al. UnNatural Language Inference. ACL, 2020.

[6] Long Mai et al. Out of Order: How important is the sequential order of words in a sentence in Natural Language Understanding tasks? Findings, 2020.

[7] Goran Nenadic et al. Semantics Altering Modifications for Evaluating Comprehension in Machine Reading. AAAI, 2020.

[8] Pham Quang Nhat Minh. An Empirical Study of Using Pre-trained BERT Models for Vietnamese Relation Extraction Task at VLSP 2020. VLSP, 2020.

[9] Tal Linzen. How Can We Accelerate Progress Towards Human-like Linguistic Generalization? ACL, 2020.

[10] Quoc V. Le et al. Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension. ICLR, 2020.

[11] Noah A. Smith et al. Evaluating Models’ Local Decision Boundaries via Contrast Sets. Findings of EMNLP, 2020.

[12] Jonathan Berant et al. Injecting Numerical Reasoning Skills into Language Models. ACL, 2020.

[13] Jonathan Berant et al. A Simple and Effective Model for Answering Multi-span Questions. EMNLP, 2019.

[14] Jonathan Berant et al. On Making Reading Comprehension More Comprehensive. EMNLP, 2019.

[15] Ido Dagan et al. Diversify Your Datasets: Analyzing Generalization via Controlled Variance in Adversarial Datasets. CoNLL, 2019.

[16] Zhiyuan Liu et al. NumNet: Machine Reading Comprehension with Numerical Reasoning. EMNLP, 2019.

[17] Zhen Huang et al. A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning. EMNLP, 2019.

[18] Omer Levy et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint, 2019.

[19] Dheeru Dua et al. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. NAACL, 2019.

[20] R. Thomas McCoy et al. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. ACL, 2019.

[21] Carolyn Penstein Rosé et al. EQUATE: A Benchmark Evaluation Framework for Quantitative Reasoning in Natural Language Inference. CoNLL, 2019.

[22] Ming-Wei Chang et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. NAACL, 2019.

[23] Percy Liang et al. Adversarial Examples for Evaluating Reading Comprehension Systems. EMNLP, 2017.

[24] Lukasz Kaiser et al. Attention Is All You Need. NIPS, 2017.

[25] Jeffrey Pennington et al. GloVe: Global Vectors for Word Representation. EMNLP, 2014.