Document Ranking with a Pretrained Sequence-to-Sequence Model
Rodrigo Nogueira | Zhiying Jiang | Ronak Pradeep | Jimmy Lin
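The paper adapts a pretrained sequence-to-sequence model (T5) to ranking: each query-document pair is cast as the input "Query: {q} Document: {d} Relevant:", the model is fine-tuned to generate the word "true" or "false", and candidates are reranked by the probability assigned to "true". Below is a minimal sketch of that scoring scheme, assuming the Hugging Face transformers API; the castorini/monot5-base-msmarco checkpoint name is an assumption about a publicly released fine-tuned model, not something stated on this page.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Assumed checkpoint: a T5 reranker fine-tuned on MS MARCO, published under
# the "castorini" organization on the Hugging Face hub.
MODEL_NAME = "castorini/monot5-base-msmarco"

tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
model.eval()

TRUE_ID = tokenizer.encode("true")[0]    # first subword of the "relevant" target
FALSE_ID = tokenizer.encode("false")[0]  # first subword of the "not relevant" target

def relevance_score(query: str, document: str) -> float:
    """Probability mass the model assigns to 'true' vs. 'false' for this pair."""
    prompt = f"Query: {query} Document: {document} Relevant:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    # One decoder step suffices: only the first generated token's logits matter.
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=start).logits[0, -1]
    probs = torch.softmax(logits[[TRUE_ID, FALSE_ID]], dim=0)
    return probs[0].item()

# Rerank a small candidate list by descending relevance score.
candidates = [
    "BM25 is a bag-of-words ranking function used in search engines.",
    "The weather in Waterloo is sunny today.",
]
ranked = sorted(candidates, key=lambda d: relevance_score("what is BM25", d), reverse=True)
print(ranked)
```

In the paper's multi-stage setup, a scorer like this reranks an initial BM25 candidate list (e.g., retrieved with Anserini [36]) rather than scoring the entire corpus.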
[1] Jamie Callan, et al. Deeper Text Understanding for IR with Contextual Neural Language Modeling, 2019, SIGIR.
[2] Kyunghyun Cho, et al. Passage Re-ranking with BERT, 2019, ArXiv.
[3] Canjia Li, et al. PARADE: Passage Representation Aggregation for Document Reranking, 2020, ArXiv.
[4] Alec Radford, et al. Scaling Laws for Neural Language Models, 2020, ArXiv.
[5] Yann Dauphin, et al. Hierarchical Neural Story Generation, 2018, ACL.
[6] Roy Schwartz, et al. Show Your Work: Improved Reporting of Experimental Results, 2019, EMNLP.
[7] Jimmy J. Lin, et al. Effectiveness/efficiency tradeoffs for candidate generation in multi-stage retrieval architectures, 2013, SIGIR.
[8] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[9] W. Bruce Croft, et al. A Deep Relevance Matching Model for Ad-hoc Retrieval, 2016, CIKM.
[10] Yao Zhao, et al. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization, 2020, ICML.
[11] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[12] Hang Li. Learning to Rank for Information Retrieval and Natural Language Processing, 2011, Synthesis Lectures on Human Language Technologies.
[13] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[14] Jimmy J. Lin, et al. Multi-Stage Document Ranking with BERT, 2019, ArXiv.
[15] Zhuyun Dai, et al. Rethinking Query Expansion for BERT Reranking, 2020, ECIR.
[16] Xu Tan, et al. MASS: Masked Sequence to Sequence Pre-training for Language Generation, 2019, ICML.
[17] Ellen M. Voorhees, et al. Overview of the TREC 2004 Robust Track, 2004, TREC.
[18] Jimmy J. Lin, et al. Critically Examining the "Neural Hype": Weak Baselines and the Additivity of Effectiveness Gains from Neural Ranking Models, 2019, SIGIR.
[19] Bhaskar Mitra, et al. An Introduction to Neural Information Retrieval, 2018, Found. Trends Inf. Retr.
[20] Stephen E. Robertson, et al. Okapi at TREC-3, 1994, TREC.
[21] Stephen E. Robertson, M. Gatford, et al. Centre for Interactive Systems Research, Department of Information Science, 1996.
[22] Nick Craswell, et al. Learning to Match using Local and Distributed Representations of Text for Web Search, 2016, WWW.
[23] James Allan, et al. TREC 2017 Common Core Track Overview, 2017, TREC.
[24] Gerard de Melo, et al. Co-PACRR: A Context-Aware Neural IR Model for Ad-hoc Retrieval, 2017, WSDM.
[25] Zhiyuan Liu, et al. End-to-End Neural Ad-hoc Ranking with Kernel Pooling, 2017, SIGIR.
[26] Nazli Goharian, et al. CEDR: Contextualized Embeddings for Document Ranking, 2019, SIGIR.
[27] Jianfeng Gao, et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset, 2018.
[28] Xiaodong Liu, et al. Unified Language Model Pre-training for Natural Language Understanding and Generation, 2019, NeurIPS.
[29] Bryan Catanzaro, et al. Zero-shot Text Classification With Generative Language Models, 2019, ArXiv.
[30] Tie-Yan Liu, et al. Learning to rank for information retrieval, 2009, SIGIR.
[31] Emine Yilmaz, et al. On the Reliability of Test Collections for Evaluating Systems of Different Types, 2020, SIGIR.
[32] Jimmy J. Lin, et al. Document Expansion by Query Prediction, 2019, ArXiv.
[33] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[34] Jimmy J. Lin, et al. Cross-Domain Modeling of Sentence-Level Evidence for Document Retrieval, 2019, EMNLP.
[35] Md. Mustafizur Rahman, et al. Neural information retrieval: at the end of the early years, 2017, Information Retrieval Journal.
[36] Jimmy J. Lin, et al. Anserini: Enabling the Use of Lucene for Information Retrieval Research, 2017, SIGIR.
[37] Taku Kudo, et al. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing, 2018, EMNLP.