Reconstructed Option Rereading Network for Opinion Questions Reading Comprehension

The multiple-choice reading comprehension task, which aims to choose the correct option from a set of candidates for a question about a related passage, has seen a recent surge of popularity. Previous work focuses on factoid questions but ignores opinion questions, whose options are usually short sentiment phrases such as “Good” or “Bad”. Because such options carry little semantic content of their own, prior approaches, which rest on the premise that options are semantically rich, fail to model the interaction among passage, question, and options. To this end, we propose a Reconstructed Option Rereading Network (RORN). We first reconstruct each option based on the question; the model then uses the reconstructed options to generate option representations, which are fed into a max-pooling layer to obtain a ranking score for each option. Experiments show that our model achieves state-of-the-art performance on the Chinese opinion-question machine reading comprehension dataset from the AI Challenger competition.
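
As a rough illustration of this pipeline, the Python (PyTorch) sketch below mirrors the three steps named in the abstract: reconstruct each option from the question, encode it jointly with the passage, and max-pool to a ranking score. The class name, the LSTM encoder, and all layer sizes are our own placeholders for exposition, not the paper's actual architecture.

import torch
import torch.nn as nn

class RORNSketch(nn.Module):
    """Toy sketch: score each sentiment option against a passage/question pair."""

    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Placeholder encoder; the paper's actual encoder is not specified here.
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def reconstruct(self, question_ids, option_ids):
        # Step 1: a bare sentiment option ("Good"/"Bad") carries little
        # meaning on its own, so prefix it with the question tokens.
        return torch.cat([question_ids, option_ids], dim=-1)

    def forward(self, passage_ids, question_ids, option_ids_list):
        scores = []
        for option_ids in option_ids_list:
            rec = self.reconstruct(question_ids, option_ids)
            # Step 2: encode the passage and reconstructed option jointly.
            tokens = torch.cat([passage_ids, rec], dim=-1)
            ctx, _ = self.encoder(self.embed(tokens))
            # Step 3: max-pool over time, then project to a scalar score.
            pooled, _ = ctx.max(dim=1)
            scores.append(self.score(pooled))
        # One score per candidate option; the argmax is the prediction.
        return torch.cat(scores, dim=-1)

For three candidates such as “Good”, “Bad”, and “Cannot determine”, the forward pass returns three scores, and the highest-scoring option is taken as the answer.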
