An ALBERT-based Similarity Measure for Mathematical Answer Retrieval