TT-Net: Topic Transfer-Based Neural Network for Conversational Reading Comprehension

Conversational machine reading comprehension (MRC) is a recently introduced question answering task that is more challenging than traditional single-turn MRC because it requires a deeper understanding of the conversation history. In this paper, we propose TT-Net, a novel neural network model for conversational reading comprehension that captures topic-transfer features in the dialog using a temporal convolutional network (TCN). At its core is the TT-Block, which combines a BiLSTM, a TCN, and a self-attention mechanism to extract topic-transfer features between questions. We evaluate our model on the CoQA benchmark dataset against several baselines, including the strong FlowQA model. The results show that TT-Net outperforms BiDAF++ by 7.6% and FlowQA by 0.7%; in the children's story domain in particular, it improves on FlowQA by 3.9%. This indicates that TT-Net delivers a solid improvement for conversational reading comprehension.
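
The abstract describes the TT-Block only at a high level: a BiLSTM, a TCN, and self-attention applied over the sequence of questions. The following is a minimal PyTorch sketch of such a block, assuming one vector per question turn as input; the class name TTBlock, the layer sizes, the single causal convolution standing in for a full TCN, and the single-head attention are all illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TTBlock(nn.Module):
    """Hypothetical sketch of a TT-Block: BiLSTM -> causal conv (TCN) -> self-attention.

    The paper only states that the block combines these three components to
    model topic transfer across questions; all wiring and sizes here are
    illustrative assumptions.
    """

    def __init__(self, input_dim: int, hidden_dim: int = 128, kernel_size: int = 3):
        super().__init__()
        # BiLSTM over the sequence of per-turn question encodings.
        self.bilstm = nn.LSTM(input_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Causal 1D convolution (the core operation of a TCN layer):
        # left-pad the time axis so turn t only sees turns <= t.
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(2 * hidden_dim, 2 * hidden_dim, kernel_size)
        # Single-head scaled dot-product self-attention over the turns.
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads=1,
                                          batch_first=True)

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        # q: (batch, num_turns, input_dim) -- one vector per question turn.
        h, _ = self.bilstm(q)                         # (batch, turns, 2*hidden)
        c = F.pad(h.transpose(1, 2), (self.pad, 0))   # left-pad the time axis
        c = torch.relu(self.conv(c)).transpose(1, 2)  # causal conv features
        out, _ = self.attn(c, c, c)                   # topic-transfer features
        return out

# Usage: encode each question in the dialog to a vector, then run the block.
block = TTBlock(input_dim=300)
questions = torch.randn(2, 5, 300)  # batch of 2 dialogs, 5 turns each
features = block(questions)         # (2, 5, 256)
```

A full TCN would stack several such dilated causal convolutions with residual connections; a single layer is used here only to keep the sketch short.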
