Patrick Lewis | Ethan Perez | Aleksandra Piktus | Fabio Petroni | Vladimir Karpukhin | Naman Goyal | Heinrich Küttler | Mike Lewis | Wen-tau Yih | Tim Rocktäschel | Sebastian Riedel | Douwe Kiela
[1] Eunsol Choi, et al. Entities as Experts: Sparse Memory Access with Entity Supervision, 2020, EMNLP.
[2] Fabio Petroni, et al. How Decoding Strategies Affect the Verifiability of Generated Text, 2019, Findings of EMNLP.
[3] Jason Weston, et al. Reading Wikipedia to Answer Open-Domain Questions, 2017, ACL.
[4] Mitesh M. Khapra, et al. Towards Exploiting Background Knowledge for Building Conversation Systems, 2018, EMNLP.
[5] Ming-Wei Chang, et al. Natural Questions: A Benchmark for Question Answering Research, 2019, TACL.
[6] Ashwin K. Vijayakumar, et al. Diverse Beam Search for Improved Description of Complex Scenes, 2018, AAAI.
[7] Chenliang Li, et al. PALM: Pre-training an Autoencoding & Autoregressive Language Model for Context-conditioned Generation, 2020, EMNLP.
[8] Rémi Louf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[9] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[10] Ming-Wei Chang, et al. A Knowledge-Grounded Neural Conversation Model, 2017, AAAI.
[11] Yann Dauphin, et al. Hierarchical Neural Story Generation, 2018, ACL.
[12] Jeff Johnson, et al. Billion-Scale Similarity Search with GPUs, 2017, IEEE Transactions on Big Data.
[13] Hao Wu, et al. Mixed Precision Training, 2017, ICLR.
[14] Nabil Hossain, et al. Simple and Effective Retrieve-Edit-Rerank Text Generation, 2020, ACL.
[15] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[16] Jason Weston, et al. ACUTE-EVAL: Improved Dialogue Evaluation with Optimized Questions and Multi-turn Comparisons, 2019, arXiv.
[17] Kyunghyun Cho, et al. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine, 2017, arXiv.
[18] Christopher Clark, et al. Simple and Effective Multi-Paragraph Reading Comprehension, 2017, ACL.
[19] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[20] Hugo Zaragoza, et al. The Probabilistic Relevance Framework: BM25 and Beyond, 2009, Found. Trends Inf. Retr.
[21] Jason Weston, et al. End-To-End Memory Networks, 2015, NIPS.
[22] Eunsol Choi, et al. Coarse-to-Fine Question Answering for Long Documents, 2016, ACL.
[23] Jianfeng Gao, et al. A Diversity-Promoting Objective Function for Neural Conversation Models, 2015, NAACL.
[24] Yury A. Malkov, et al. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[25] John Salvatier, et al. When Will AI Exceed Human Performance? Evidence from AI Experts, 2017, arXiv.
[26] Jason Weston, et al. Memory Networks, 2014, ICLR.
[27] Lukasz Kaiser, et al. Generating Wikipedia by Summarizing Long Sequences, 2018, ICLR.
[28] James Thorne, et al. Avoiding Catastrophic Forgetting in Mitigating Model Biases in Sentence-Pair Classification with Elastic Weight Consolidation, 2020, arXiv.
[29] Danqi Chen, et al. Dense Passage Retrieval for Open-Domain Question Answering, 2020, EMNLP.
[30] Petr Baudiš, et al. Modeling of the Question Answering Task in the YodaQA System, 2015, CLEF.
[31] Colin Raffel, et al. How Much Knowledge Can You Pack into the Parameters of a Language Model?, 2020, EMNLP.
[32] Ming-Wei Chang, et al. REALM: Retrieval-Augmented Language Model Pre-Training, 2020, ICML.
[33] M. Zhou, et al. Reasoning Over Semantic-Level Graph for Fact Checking, 2019, ACL.
[34] Kyunghyun Cho, et al. Passage Re-ranking with BERT, 2019, arXiv.
[35] Jason Weston, et al. Finding Generalizable Evidence by Learning to Convince Q&A Models, 2019, EMNLP.
[36] Wei Zhang, et al. Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering, 2017, ICLR.
[37] Mohit Bansal, et al. Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering, 2019, EMNLP.
[38] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[39] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[40] Claire Gardent, et al. Augmenting Transformers with KNN-Based Composite Memory, 2019.
[41] Gary Marcus, et al. The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence, 2020, arXiv.
[42] Jason Weston, et al. Wizard of Wikipedia: Knowledge-Powered Conversational Agents, 2018, ICLR.
[43] Jianfeng Gao, et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset, 2018.
[44] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[45] Ming-Wei Chang, et al. Latent Retrieval for Weakly Supervised Open Domain Question Answering, 2019, ACL.
[46] Andreas Vlachos, et al. FEVER: a Large-scale Dataset for Fact Extraction and VERification, 2018, NAACL.
[47] Andrew Chou, et al. Semantic Parsing on Freebase from Question-Answer Pairs, 2013, EMNLP.
[48] Omer Levy, et al. Generalization through Memorization: Nearest Neighbor Language Models, 2019, ICLR.
[49] Guillaume Lample, et al. Large Memory Layers with Product Keys, 2019, NeurIPS.
[50] Fabio Petroni, et al. How Context Affects Language Models' Factual Predictions, 2020, AKBC.
[51] Alec Radford, et al. Release Strategies and the Social Impacts of Language Models, 2019, arXiv.
[52] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[53] Sebastian Riedel, et al. Language Models as Knowledge Bases?, 2019, EMNLP.
[54] Yong Wang, et al. Search Engine Guided Neural Machine Translation, 2018, AAAI.
[55] Tomas Mikolov, et al. Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets, 2015, NIPS.
[56] Omer Levy, et al. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems, 2019, NeurIPS.
[57] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[58] Jason Weston, et al. Retrieve and Refine: Improved Sequence Generation Models For Dialogue, 2018, SCAI@EMNLP.
[59] Jason Weston, et al. ELI5: Long Form Question Answering, 2019, ACL.
[60] Eunsol Choi, et al. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension, 2017, ACL.
[61] Percy Liang, et al. Generating Sentences by Editing Prototypes, 2017, TACL.
[62] Zhongjun He, et al. Robust Neural Machine Translation with Joint Textual and Phonetic Embedding, 2018, ACL.
[63] Percy Liang, et al. A Retrieve-and-Edit Framework for Predicting Structured Outputs, 2018, NeurIPS.
[64] Myle Ott, et al. fairseq: A Fast, Extensible Toolkit for Sequence Modeling, 2019, NAACL.
[65] Mitesh M. Khapra, et al. Towards a Better Metric for Evaluating Question Generation Systems, 2018, EMNLP.