FlowDelta: Modeling Flow Information Gain in Reasoning for Conversational Machine Comprehension

Conversational machine comprehension requires a deep understanding of the dialogue flow; prior work proposed FlowQA, which implicitly models the context representations during reasoning for better understanding. This paper proposes FlowDelta, which explicitly models the information gain through dialogue reasoning, allowing the model to focus on more informative cues. The proposed model achieves state-of-the-art performance on the conversational QA dataset QuAC and the sequential instruction understanding dataset SCONE, demonstrating the effectiveness of the proposed mechanism and its capability of generalizing to different QA models and tasks.
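To make the mechanism concrete, below is a minimal PyTorch sketch of the flow-delta intuition: the information gain between two consecutive reasoning layers' context representations is computed as their difference and fed forward alongside the current representation. All names here (`FlowDeltaLayer`, the GRU encoder, the tensor shapes) are illustrative assumptions for this sketch, not the authors' implementation, which builds on FlowQA's BiLSTM-based flow layers.

```python
# A minimal sketch of the flow-delta idea, assuming a FlowQA-style setup in
# which each reasoning layer l produces context representations of shape
# (batch, context_len, hidden). Hypothetical names, not the paper's code.
import torch
import torch.nn as nn

class FlowDeltaLayer(nn.Module):
    """One reasoning layer that consumes the gain between consecutive
    layers' context representations alongside the representation itself."""

    def __init__(self, hidden_size: int):
        super().__init__()
        # Illustrative recurrent encoder; FlowQA itself uses BiLSTM layers.
        self.rnn = nn.GRU(2 * hidden_size, hidden_size, batch_first=True)

    def forward(self, h_curr: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        # Information gain of the current layer over the previous one.
        delta = h_curr - h_prev
        # Feed the representation together with its gain, so the layer can
        # attend to what changed during reasoning (the informative cues).
        out, _ = self.rnn(torch.cat([h_curr, delta], dim=-1))
        return out

# Usage on toy tensors:
layer = FlowDeltaLayer(hidden_size=64)
h_prev = torch.randn(2, 30, 64)   # layer l-1 context representations
h_curr = torch.randn(2, 30, 64)   # layer l context representations
h_next = layer(h_curr, h_prev)    # -> shape (2, 30, 64)
```

Subtracting consecutive layers isolates what changed during reasoning, so the downstream encoder can weight newly gained information rather than static context.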
