Deep Human Answer Understanding for Natural Reverse QA

This study focuses on a reverse question answering (QA) procedure in which machines proactively raise questions and humans supply the answers. The procedure arises in many real human-machine interaction applications, where a crucial problem is answer understanding. Existing solutions sidestep automatic answer understanding by forcing users to select from predefined option terms, which makes the interaction unnatural and degrades the user experience. To this end, the current study proposes a novel deep answer understanding network, called AntNet, for reverse QA. The network consists of three new modules: skeleton attention for questions, relevance-aware representation of answers, and multi-hop-based fusion. Because answer understanding for reverse QA has not been explored, a new data corpus is compiled in this study. Experimental results indicate that the proposed network significantly outperforms existing methods as well as variants adapted from classical deep natural language processing models. The effectiveness of the three new modules is also verified.
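The abstract does not spell out the internals of the three modules. Purely as an illustration of what "multi-hop" fusion between question and answer representations generally means, the following untrained numpy sketch iteratively refines an answer summary by attending over question token embeddings. Every name, the averaging-based update rule, and the number of hops are assumptions for exposition, not the authors' AntNet design:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_hop_fusion(question, answer, hops=3):
    """Illustrative multi-hop fusion (not the paper's actual module).

    question: (Lq, d) array of question token embeddings
    answer:   (La, d) array of answer token embeddings
    Returns a fused (d,) summary vector.
    """
    state = answer.mean(axis=0)        # initial answer summary, shape (d,)
    for _ in range(hops):
        scores = question @ state      # (Lq,) relevance of each question token
        attn = softmax(scores)         # attention weights over question tokens
        context = attn @ question      # (d,) question context for this hop
        state = 0.5 * (state + context)  # simple fusion update per hop
    return state
```

In a trained model, the per-hop update would use learned projection matrices rather than a fixed average; the sketch only shows the control flow of repeated attention hops.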
