SunBear at WNUT-2020 Task 2: Improving BERT-Based Noisy Text Classification with Knowledge of the Data Domain