Large Language Models are Built-in Autoregressive Search Engines
W. Yu | Meng Jiang | Zhihan Zhang | Noah Ziems