[1] Pushpak Bhattacharyya,et al. Reinforced Multi-task Approach for Multi-hop Question Generation , 2020, COLING.
[2] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[3] Richard Socher,et al. Efficient and Robust Question Answering from Minimal Context over Documents , 2018, ACL.
[4] Rich Caruana,et al. Multitask Learning , 1998, Encyclopedia of Machine Learning and Data Mining.
[5] Stefan Feuerriegel,et al. Adaptive Document Retrieval for Deep Question Answering , 2018, EMNLP.
[6] Yoshua Bengio,et al. Neural Machine Translation by Jointly Learning to Align and Translate , 2014, ICLR.
[7] Ali Farhadi,et al. Phrase-Indexed Question Answering: A New Challenge for Scalable Document Comprehension , 2018, EMNLP.
[8] Yu Xu,et al. Learning to Generate Questions by Learning What not to Generate , 2019, WWW.
[9] Rémi Louf,et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing , 2019, ArXiv.
[10] Xinya Du,et al. Learning to Ask: Neural Question Generation for Reading Comprehension , 2017, ACL.
[11] John C. Nesbit,et al. Generating Natural Language Questions to Support Learning On-Line , 2013, ENLG.
[12] Ming-Wei Chang,et al. REALM: Retrieval-Augmented Language Model Pre-Training , 2020, ICML.
[13] Jian Zhang,et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text , 2016, EMNLP.
[14] Noah A. Smith,et al. Automatic factual question generation from text , 2011.
[15] Rajarshi Das,et al. Weaver: Deep Co-Encoding of Questions and Documents for Machine Reading , 2018, ArXiv.
[16] Jason Weston,et al. Reading Wikipedia to Answer Open-Domain Questions , 2017, ACL.
[17] Xiaodong Liu,et al. Multi-task Learning with Sample Re-weighting for Machine Reading Comprehension , 2018, NAACL.
[18] Ruslan Salakhutdinov,et al. Semi-Supervised QA with Generative Domain-Adaptive Nets , 2017, ACL.
[19] Xinya Du,et al. Harvesting Paragraph-level Question-Answer Pairs from Wikipedia , 2018, ACL.
[20] Yao Zhao,et al. Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks , 2018, EMNLP.
[21] Xiaodong Liu,et al. Adversarial Domain Adaptation for Machine Reading Comprehension , 2019, EMNLP.
[22] Yoshua Bengio,et al. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering , 2018, EMNLP.
[23] Ming-Wei Chang,et al. Latent Retrieval for Weakly Supervised Open Domain Question Answering , 2019, ACL.
[24] Sadid A. Hasan,et al. Towards Automatic Topical Question Generation , 2012, COLING.
[25] Chin-Yew Lin,et al. ROUGE: A Package for Automatic Evaluation of Summaries , 2004, ACL.
[26] Jianfeng Gao,et al. UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training , 2020, ICML.
[27] Rajarshi Das,et al. Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering , 2019, ICLR.
[28] Mohit Bansal,et al. Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering , 2019, EMNLP.
[29] Ramesh Nallapati,et al. Multi-passage BERT: A Globally Normalized BERT Model for Open-domain Question Answering , 2019, EMNLP.
[30] Omer Levy,et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach , 2019, ArXiv.
[31] Wei Zhang,et al. R3: Reinforced Ranker-Reader for Open-Domain Question Answering , 2018, AAAI.
[32] Yansong Feng,et al. Semantic Graphs for Generating Deep Questions , 2020, ACL.
[33] Ali Farhadi,et al. Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index , 2019, ACL.
[34] Yue Zhang,et al. Leveraging Context Information for Natural Question Generation , 2018, NAACL.
[35] Xiaodong Liu,et al. Unified Language Model Pre-training for Natural Language Understanding and Generation , 2019, NeurIPS.
[36] Salim Roukos,et al. Bleu: a Method for Automatic Evaluation of Machine Translation , 2002, ACL.
[37] Yoshua Bengio,et al. Generating Factoid Questions With Recurrent Neural Networks: The 30M Factoid Question-Answer Corpus , 2016, ACL.
[38] Jaewoo Kang,et al. Ranking Paragraphs for Improving Answer Recall in Open-Domain Question Answering , 2018, EMNLP.
[39] Daniel Jurafsky,et al. A Simple, Fast Diverse Decoding Algorithm for Neural Generation , 2016, ArXiv.
[40] Alon Lavie,et al. The Meteor metric for automatic evaluation of machine translation , 2009, Machine Translation.
[41] Hao Tian,et al. ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation , 2020, ArXiv.
[42] Jimmy J. Lin,et al. End-to-End Open-Domain Question Answering with BERTserini , 2019, NAACL.
[43] Mitesh M. Khapra,et al. Towards a Better Metric for Evaluating Question Generation Systems , 2018, EMNLP.
[44] Xin Wang,et al. No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling , 2018, ACL.