Yeon Seonwoo | Ji-Hoon Kim | Jung-Woo Ha | Alice Oh
[1] Eunsol Choi, et al. MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension, 2019, MRQA@EMNLP.
[2] Philip Bachman, et al. NewsQA: A Machine Comprehension Dataset, 2016, Rep4NLP@ACL.
[3] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[4] Wei Zhang, et al. Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering, 2017, ICLR.
[5] Hannaneh Hajishirzi, et al. Multi-hop Reading Comprehension through Question Decomposition and Rescoring, 2019, ACL.
[6] Jian Su, et al. Densely Connected Attention Propagation for Reading Comprehension, 2018, NeurIPS.
[7] Richard Socher, et al. Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering, 2019, ICLR.
[8] Phil Blunsom, et al. Teaching Machines to Read and Comprehend, 2015, NIPS.
[9] Ali Farhadi, et al. Bidirectional Attention Flow for Machine Comprehension, 2016, ICLR.
[10] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[11] Ming-Wei Chang, et al. Latent Retrieval for Weakly Supervised Open Domain Question Answering, 2019, ACL.
[12] Danqi Chen, et al. A Discrete Hard EM Approach for Weakly Supervised Question Answering, 2019, EMNLP.
[13] Ankur P. Parikh, et al. Multi-Mention Learning for Reading Comprehension with Neural Cascades, 2017, ICLR.
[14] Eunsol Choi, et al. Coarse-to-Fine Question Answering for Long Documents, 2016, ACL.
[15] Danqi Chen, et al. A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task, 2016, ACL.
[16] Ming-Wei Chang, et al. Natural Questions: A Benchmark for Question Answering Research, 2019, TACL.
[17] Omer Levy, et al. SpanBERT: Improving Pre-training by Representing and Predicting Spans, 2019, TACL.
[18] Rajarshi Das, et al. Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering, 2019, ICLR.
[19] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[20] Kevin Gimpel, et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, 2019, ICLR.
[21] Jason Weston, et al. Reading Wikipedia to Answer Open-Domain Questions, 2017, ACL.
[22] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[23] Wei Li, et al. A Unified Model for Document-Based Question Answering Based on Human-Like Reading Strategy, 2018, AAAI.
[24] Minlie Huang, et al. A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction, 2020, ACL.