Weiming Zhang | Sheng Zhang | Xin Zhang | Hui Wang | Shanshan Liu
[2] Zhiguo Wang,et al. Multi-Perspective Context Matching for Machine Comprehension , 2016, ArXiv.
[3] Samuel R. Bowman,et al. Training a Ranking Function for Open-Domain Question Answering , 2018, NAACL.
[4] Simon Ostermann,et al. MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge , 2018, LREC.
[5] Mitesh M. Khapra,et al. DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension , 2018, ACL.
[6] Ming Zhou,et al. Gated Self-Matching Networks for Reading Comprehension and Question Answering , 2017, ACL.
[7] Furu Wei,et al. Read + Verify: Machine Reading Comprehension with Unanswerable Questions , 2018, AAAI.
[8] Jaewoo Kang,et al. Ranking Paragraphs for Improving Answer Recall in Open-Domain Question Answering , 2018, EMNLP.
[9] Guokun Lai,et al. RACE: Large-scale ReAding Comprehension Dataset From Examinations , 2017, EMNLP.
[10] Matthew Richardson,et al. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text , 2013, EMNLP.
[11] Tom M. Mitchell,et al. Leveraging Knowledge Bases in LSTMs for Improving Machine Reading , 2017, ACL.
[12] Rudolf Kadlec,et al. Text Understanding with the Attention Sum Reader Network , 2016, ACL.
[13] Chris Dyer,et al. The NarrativeQA Reading Comprehension Challenge , 2017, TACL.
[14] Rudolf Kadlec,et al. Embracing data abundance: BookTest Dataset for Reading Comprehension , 2016, ICLR.
[15] Philip Bachman,et al. Natural Language Comprehension with the EpiReader , 2016, EMNLP.
[16] Ye Yuan,et al. Words or Characters? Fine-grained Gating for Reading Comprehension , 2016, ICLR.
[17] Yang Liu,et al. U-Net: Machine Reading Comprehension with Unanswerable Questions , 2018, ArXiv.
[18] Zhiyuan Liu,et al. Denoising Distantly Supervised Open-Domain Question Answering , 2018, ACL.
[19] Ruslan Salakhutdinov,et al. A Comparative Study of Word Embeddings for Reading Comprehension , 2017, ArXiv.
[20] Christopher Clark,et al. Simple and Effective Multi-Paragraph Reading Comprehension , 2017, ACL.
[21] Bowen Zhou,et al. End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension , 2016, ArXiv.
[22] Chin-Yew Lin,et al. ROUGE: A Package for Automatic Evaluation of Summaries , 2004, ACL.
[23] Philip Bachman,et al. NewsQA: A Machine Comprehension Dataset , 2016, Rep4NLP@ACL.
[24] Salim Roukos,et al. Bleu: a Method for Automatic Evaluation of Machine Translation , 2002, ACL.
[25] Sandro Pezzelle,et al. The LAMBADA dataset: Word prediction requiring a broad discourse context , 2016, ACL.
[26] Jason Weston,et al. The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations , 2015, ICLR.
[27] Nan Yang,et al. I Know There Is No Answer: Modeling Answer Validation for Machine Reading Comprehension , 2018, NLPCC.
[28] Shuohang Wang,et al. Machine Comprehension Using Match-LSTM and Answer Pointer , 2016, ICLR.
[29] Seunghak Yu,et al. A Multi-Stage Memory Augmented Neural Network for Machine Reading Comprehension , 2018, QA@ACL.
[30] Jian Zhang,et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text , 2016, EMNLP.
[31] Ming Zhou,et al. S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension , 2017, AAAI.
[32] Christopher D. Manning,et al. Effective Approaches to Attention-based Neural Machine Translation , 2015, EMNLP.
[33] Richard Socher,et al. Dynamic Coattention Networks For Question Answering , 2016, ICLR.
[34] Jungang Xu,et al. A Survey on Neural Machine Reading Comprehension , 2019, ArXiv.
[35] Jason Weston,et al. End-To-End Memory Networks , 2015, NIPS.
[36] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[37] Hui Wang,et al. R-Trans: RNN Transformer Network for Chinese Machine Reading Comprehension , 2019, IEEE Access.
[38] Zachary C. Lipton,et al. How Much Reading Does Reading Comprehension Require? A Critical Investigation of Popular Benchmarks , 2018, EMNLP.
[39] Philip Bachman,et al. Iterative Alternating Neural Attention for Machine Reading , 2016, ArXiv.
[40] Kyunghyun Cho,et al. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine , 2017, ArXiv.
[41] Eunsol Choi,et al. Conversational Machine Comprehension , 2019.
[42] Xinyan Xiao,et al. DuReader: a Chinese Machine Reading Comprehension Dataset from Real-world Applications , 2017, QA@ACL.
[43] Yoshua Bengio,et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation , 2014, EMNLP.
[44] Todor Mihaylov,et al. Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with External Commonsense Knowledge , 2018, ACL.
[45] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[46] Mark Yatskar,et al. A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC , 2018, NAACL.
[47] Danqi Chen,et al. CoQA: A Conversational Question Answering Challenge , 2018, TACL.
[48] Eunsol Choi,et al. QuAC: Question Answering in Context , 2018, EMNLP.
[49] Preslav Nakov,et al. SemEval-2016 Task 4: Sentiment Analysis in Twitter , 2016, SemEval@NAACL.
[50] Omer Levy,et al. Zero-Shot Relation Extraction via Reading Comprehension , 2017, CoNLL.
[51] Jeffrey Pennington,et al. GloVe: Global Vectors for Word Representation , 2014, EMNLP.
[52] Luke S. Zettlemoyer,et al. Deep Contextualized Word Representations , 2018, NAACL.
[53] Jason Weston,et al. Reading Wikipedia to Answer Open-Domain Questions , 2017, ACL.
[54] Ruslan Salakhutdinov,et al. Gated-Attention Readers for Text Comprehension , 2016, ACL.
[55] Richard Socher,et al. Learned in Translation: Contextualized Word Vectors , 2017, NIPS.
[56] Ali Farhadi,et al. Bidirectional Attention Flow for Machine Comprehension , 2016, ICLR.
[57] Walter Daelemans,et al. CliCR: a Dataset of Clinical Case Reports for Machine Reading Comprehension , 2018, NAACL.
[58] Phil Blunsom,et al. Teaching Machines to Read and Comprehend , 2015, NIPS.
[59] Ming Zhou,et al. Reinforced Mnemonic Reader for Machine Reading Comprehension , 2017, IJCAI.
[60] Rajarshi Das,et al. Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering , 2019, ICLR.
[61] Utpal Garain,et al. CNN for Text-Based Multiple Choice Question Answering , 2018, ACL.
[62] Li-Rong Dai,et al. Exploring Question Understanding and Adaptation in Neural-Network-Based Question Answering , 2017, ArXiv.
[63] Furu Wei,et al. Hierarchical Attention Flow for Multiple-Choice Reading Comprehension , 2018, AAAI.
[64] Percy Liang,et al. Adversarial Examples for Evaluating Reading Comprehension Systems , 2017, EMNLP.
[65] William W. Cohen,et al. Quasar: Datasets for Question Answering by Search and Reading , 2017, ArXiv.
[66] Yelong Shen,et al. ReasoNet: Learning to Stop Reading in Machine Comprehension , 2016, CoCo@NIPS.
[67] Xiaocheng Feng,et al. Knowledge Based Machine Reading Comprehension , 2018, ArXiv.
[68] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[69] Guokun Lai,et al. Large-scale Cloze Test Dataset Designed by Teachers , 2018, ArXiv.
[70] Jianfeng Gao,et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset , 2018.
[71] Geoffrey E. Hinton,et al. Learning representations by back-propagating errors , 1986, Nature.
[72] Chenguang Zhu,et al. SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering , 2018, ArXiv.
[73] Jackie Chi Kit Cheung,et al. World Knowledge for Reading Comprehension: Rare Entity Prediction with Hierarchical LSTMs Using External Descriptions , 2017, EMNLP.
[74] Richard Socher,et al. DCN+: Mixed Objective and Deep Residual Coattention for Question Answering , 2017, ICLR.
[75] Jason Weston,et al. A Neural Attention Model for Abstractive Sentence Summarization , 2015, EMNLP.
[76] Wentao Ma,et al. Convolutional Spatial Attention Model for Reading Comprehension with Multiple-Choice Questions , 2018, AAAI.
[77] Jason Weston,et al. Key-Value Memory Networks for Directly Reading Documents , 2016, EMNLP.
[78] Deng Cai,et al. MEMEN: Multi-layer Embedding with Memory Networks for Machine Comprehension , 2017, ArXiv.
[79] Li Zhao,et al. Attention-based LSTM for Aspect-level Sentiment Classification , 2016, EMNLP.
[80] David A. McAllester,et al. Who did What: A Large-Scale Person-Centered Cloze Dataset , 2016, EMNLP.
[81] Percy Liang,et al. Know What You Don’t Know: Unanswerable Questions for SQuAD , 2018, ACL.
[82] Deng Cai,et al. Smarnet: Teaching Machines to Read and Comprehend Like Human , 2017, ArXiv.
[83] Ting Liu,et al. Attention-over-Attention Neural Networks for Reading Comprehension , 2016, ACL.
[84] Alec Radford,et al. Improving Language Understanding by Generative Pre-Training , 2018 .
[85] Eunsol Choi,et al. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension , 2017, ACL.
[86] Ilya Sutskever,et al. Language Models are Unsupervised Multitask Learners , 2019 .
[87] Danqi Chen,et al. A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task , 2016, ACL.
[88] Jeffrey Dean,et al. Distributed Representations of Words and Phrases and their Compositionality , 2013, NIPS.
[89] Jun Xu,et al. HAS-QA: Hierarchical Answer Spans Model for Open-domain Question Answering , 2019, AAAI.
[90] Jinho D. Choi,et al. Challenging Reading Comprehension on Daily Conversation: Passage Completion on Multiparty Dialog , 2018, NAACL.
[91] Hui Jiang,et al. Exploring Machine Reading Comprehension with Explicit Knowledge , 2018, ArXiv.
[92] Jason Weston,et al. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks , 2015, ICLR.
[93] Quoc V. Le,et al. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension , 2018, ICLR.
[94] Richard Socher,et al. Efficient and Robust Question Answering from Minimal Context over Documents , 2018, ACL.
[95] Jianfeng Gao,et al. Bi-directional Attention with Agreement for Dependency Parsing , 2016, EMNLP.
[96] Ting Liu,et al. Consensus Attention-based Neural Networks for Chinese Reading Comprehension , 2016, COLING.
[97] Yoshua Bengio,et al. Neural Machine Translation by Jointly Learning to Align and Translate , 2014, ICLR.