An Analysis of BERT in Document Ranking
Yiqun Liu | Jiaxin Mao | Shaoping Ma | Jingtao Zhan | Min Zhang
[1] Anna Rumshisky, et al. Revealing the Dark Secrets of BERT, 2019, EMNLP.
[2] Jamie Callan, et al. Deeper Text Understanding for IR with Contextual Neural Language Modeling, 2019, SIGIR.
[3] Kawin Ethayarajh, et al. How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings, 2019, EMNLP.
[4] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[5] Jimmy J. Lin, et al. Anserini: Enabling the Use of Lucene for Information Retrieval Research, 2017, SIGIR.
[6] Zhiyuan Liu, et al. Understanding the Behaviors of BERT in Ranking, 2019, ArXiv.
[7] W. Bruce Croft, et al. A Deep Look into Neural Ranking Models for Information Retrieval, 2019, Inf. Process. Manag.
[8] Kyunghyun Cho, et al. Passage Re-ranking with BERT, 2019, ArXiv.
[9] Yoshua Bengio, et al. Understanding intermediate layers using linear classifier probes, 2016, ICLR.
[10] Jianfeng Gao, et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset, 2018.
[11] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[12] Omer Levy, et al. What Does BERT Look at? An Analysis of BERT's Attention, 2019, BlackboxNLP@ACL.
[13] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[14] Byron C. Wallace, et al. Attention is not Explanation, 2019, NAACL.