B-PROP: Bootstrapped Pre-training with Representative Words Prediction for Ad-hoc Retrieval
[1] G. P. Shrivatsa Bhargav, et al. Span Selection Pre-training for Question Answering, 2019, ACL.
[2] ChengXiang Zhai, et al. Statistical Language Models for Information Retrieval: A Critical Review, 2008, Found. Trends Inf. Retr.
[3] Tomas Mikolov, et al. Advances in Pre-Training Distributed Word Representations, 2017, LREC.
[5] Charles L. A. Clarke, et al. Overview of the TREC 2004 Terabyte Track, 2004, TREC.
[6] Wei-Cheng Chang, et al. Pre-training Tasks for Embedding-based Large-scale Retrieval, 2020, ICLR.
[7] M. de Rijke, et al. Building Simulated Queries for Known-Item Topics: An Analysis Using Six European Languages, 2007, SIGIR.
[8] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[9] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[10] Omer Levy, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[11] Guillaume Lample, et al. Neural Architectures for Named Entity Recognition, 2016, NAACL.
[12] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[13] Bhaskar Mitra, et al. Overview of the TREC 2019 Deep Learning Track, 2020, ArXiv.
[14] Jun Xu, et al. Modeling Diverse Relevance Patterns in Ad-hoc Retrieval, 2018, SIGIR.
[15] Kyunghyun Cho, et al. Passage Re-ranking with BERT, 2019, ArXiv.
[16] Jimmy J. Lin, et al. Document Expansion by Query Prediction, 2019, ArXiv.
[17] Zhiyuan Liu, et al. Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search, 2018, WSDM.
[18] Jimmy J. Lin, et al. Simple Applications of BERT for Ad Hoc Document Retrieval, 2019, ArXiv.
[19] Minlie Huang, et al. SentiLARE: Sentiment-Aware Language Representation Learning with Linguistic Knowledge, 2020, EMNLP.
[20] Ani Nenkova, et al. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016, NAACL.
[21] Jamie Callan, et al. Deeper Text Understanding for IR with Contextual Neural Language Modeling, 2019, SIGIR.
[22] Xueqi Cheng, et al. DeepRank: A New Deep Architecture for Relevance Ranking in Information Retrieval, 2017, CIKM.
[23] Ming-Wei Chang, et al. Latent Retrieval for Weakly Supervised Open Domain Question Answering, 2019, ACL.
[25] Hugo Zaragoza, et al. The Probabilistic Relevance Framework: BM25 and Beyond, 2009, Found. Trends Inf. Retr.
[26] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[27] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[28] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[29] Yao Zhao, et al. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization, 2020, ICML.
[30] Avi Arampatzis, et al. A Study of Query Length, 2008, SIGIR.
[31] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[32] Jianfeng Gao, et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset, 2018.
[33] Xiang Ji, et al. MatchZoo: A Learning, Practicing, and Developing System for Neural Text Matching, 2019, SIGIR.
[34] Tao Qin, et al. Introducing LETOR 4.0 Datasets, 2013, ArXiv.
[35] Jimmy J. Lin, et al. Document Ranking with a Pretrained Sequence-to-Sequence Model, 2020, Findings of EMNLP.
[36] Jimmy J. Lin, et al. Pretrained Transformers for Text Ranking: BERT and Beyond, 2020, NAACL.
[37] Jiafeng Guo, et al. PROP: Pre-training with Representative Words Prediction for Ad-hoc Retrieval, 2020, ArXiv.
[38] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[39] ChengXiang Zhai, et al. Statistical Language Models for Information Retrieval, 2008, NAACL.
[40] C. J. van Rijsbergen, et al. Probabilistic Models of Information Retrieval Based on Measuring the Divergence from Randomness, 2002, TOIS.
[41] Ellen M. Voorhees, et al. Overview of the TREC 2004 Robust Track, 2004, TREC.
[42] Jimmy J. Lin, et al. Anserini: Enabling the Use of Lucene for Information Retrieval Research, 2017, SIGIR.
[43] Nazli Goharian, et al. CEDR: Contextualized Embeddings for Document Ranking, 2019, SIGIR.
[44] W. Bruce Croft, et al. A Deep Relevance Matching Model for Ad-hoc Retrieval, 2016, CIKM.