Ji Ma | Jing Lu | Gustavo Hernández Ábrego | Yinfei Yang | Jianmo Ni
[1] Jason Weston, et al. Reading Wikipedia to Answer Open-Domain Questions, 2017, ACL.
[2] Danqi Chen, et al. Dense Passage Retrieval for Open-Domain Question Answering, 2020, EMNLP.
[3] Ray Kurzweil, et al. Multilingual Universal Sentence Encoder for Semantic Retrieval, 2019, ACL.
[4] Matthew Henderson, et al. Efficient Natural Language Response Suggestion for Smart Reply, 2017, ArXiv.
[5] Rabab Kreidieh Ward, et al. Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval, 2015, IEEE/ACM Transactions on Audio, Speech, and Language Processing.
[6] Jason Baldridge, et al. Learning Dense Representations for Entity Retrieval, 2019, CoNLL.
[7] Charles L. A. Clarke, et al. Reciprocal rank fusion outperforms Condorcet and individual rank learning methods, 2009, SIGIR.
[8] Ion Androutsopoulos, et al. Deep Relevance Ranking Using Enhanced Document-Query Interactions, 2018, EMNLP.
[7] Charles L. A. Clarke,et al. Reciprocal rank fusion outperforms condorcet and individual rank learning methods , 2009, SIGIR.
[8] Ion Androutsopoulos,et al. Deep Relevance Ranking Using Enhanced Document-Query Interactions , 2018, EMNLP.
[9] Jianfeng Gao,et al. A Human Generated MAchine Reading COmprehension Dataset , 2018 .
[10] Noah Constant,et al. MultiReQA: A Cross-Domain Evaluation forRetrieval Question Answering Models , 2020, ADAPTNLP.
[11] Ye Li,et al. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval , 2020, ArXiv.
[12] Jian Zhang,et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text , 2016, EMNLP.
[13] Jeff Johnson,et al. Billion-Scale Similarity Search with GPUs , 2017, IEEE Transactions on Big Data.
[14] Ming-Wei Chang,et al. Natural Questions: A Benchmark for Question Answering Research , 2019, TACL.
[15] Colin Raffel,et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer , 2019, J. Mach. Learn. Res..
[16] Jimmy J. Lin,et al. Anserini , 2018, Journal of Data and Information Quality.
[17] Kyunghyun Cho,et al. Passage Re-ranking with BERT , 2019, ArXiv.
[18] Noah Constant,et al. ReQA: An Evaluation for End-to-End Answer Retrieval Models , 2019, EMNLP.
[19] Sanjiv Kumar,et al. Accelerating Large-Scale Inference with Anisotropic Vector Quantization , 2020, ICML.
[20] Jacob Eisenstein,et al. Sparse, Dense, and Attentional Representations for Text Retrieval , 2020, Transactions of the Association for Computational Linguistics.
[21] Wei Liu,et al. Hashing with Graphs , 2011, ICML.
[22] W. Bruce Croft,et al. A Deep Relevance Matching Model for Ad-hoc Retrieval , 2016, CIKM.
[23] Wei-Cheng Chang,et al. Pre-training Tasks for Embedding-based Large-scale Retrieval , 2020, ICLR.
[24] Ji Ma,et al. Zero-shot Neural Retrieval via Domain-targeted Synthetic Query Generation , 2020, ArXiv.
[25] Bhaskar Mitra,et al. Overview of the TREC 2019 deep learning track , 2020, ArXiv.
[26] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[27] Daniel Gillick,et al. End-to-End Retrieval in Continuous Space , 2018, ArXiv.
[28] Keith Stevens,et al. Effective Parallel Corpus Mining using Bilingual Sentence Embeddings , 2018, WMT.
[29] Nazli Goharian,et al. CEDR: Contextualized Embeddings for Document Ranking , 2019, SIGIR.
[30] Ray Kurzweil,et al. Learning Semantic Textual Similarity from Conversations , 2018, Rep4NLP@ACL.
[31] Ming-Wei Chang,et al. Latent Retrieval for Weakly Supervised Open Domain Question Answering , 2019, ACL.