Deep Learning in Question Answering

Question answering (QA) is a challenging task in natural language processing (NLP). With the remarkable success of deep learning on many NLP tasks, including semantic and syntactic analysis, machine translation, and relation extraction, increasing effort has also been devoted to question answering. This chapter briefly introduces recent advances in deep learning methods for two typical and popular question answering tasks. (1) Deep learning for question answering over knowledge bases (KBQA), which mainly employs deep neural networks to understand the meaning of a question and either translate it into a structured query, or map it directly into a distributional semantic representation that is matched against candidate answers in the knowledge base. (2) Deep learning for machine comprehension (MC), which constructs an end-to-end paradigm based on novel neural networks that directly computes deep semantic matching among the question, the candidate answers, and the given passage.
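The second KBQA strategy above, matching a question's distributed representation against candidate answer embeddings, can be illustrated with a minimal sketch. Everything here is a toy assumption for clarity, not the method of any particular system: the word embeddings are hand-picked 3-dimensional vectors, the question encoder is a simple bag-of-words average, and scoring is a plain dot product.

```python
# Toy word embeddings (assumed, 3-dimensional); real systems learn these.
EMBED = {
    "where": [1.0, 0.0, 0.0],
    "born": [0.0, 1.0, 0.0],
    "obama": [0.0, 0.0, 1.0],
}

def encode(tokens):
    """Bag-of-words question encoder: average embeddings of known tokens."""
    vecs = [EMBED[t] for t in tokens if t in EMBED]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def dot(u, v):
    """Similarity score between question and candidate-answer vectors."""
    return sum(a * b for a, b in zip(u, v))

def answer(question_tokens, candidates):
    """Return the candidate whose embedding best matches the question."""
    q = encode(question_tokens)
    return max(candidates, key=lambda name: dot(q, candidates[name]))

# Candidate answer embeddings (toy values standing in for KB embeddings).
candidates = {"Honolulu": [0.2, 0.9, 0.8], "Chicago": [0.9, 0.1, 0.1]}
print(answer(["where", "born", "obama"], candidates))  # prints "Honolulu"
```

In practice the question encoder is a convolutional or recurrent network and the candidate embeddings come from the knowledge base, but the matching step reduces to the same vector-scoring idea.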
