Yu Sun | Shikun Feng | Zhengjie Huang | Jiaxiang Liu | Xuan Ouyang | Xuyi Chen | Shuohuan Wang | Weiyue Su
[1] Yonghui Wu, et al. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation, 2016, arXiv.
[2] Alexis Conneau, et al. Unsupervised Cross-lingual Representation Learning at Scale, 2020, ACL.
[3] Jacob Devlin, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[4] Yu Sun, et al. ERNIE 2.0: A Continual Pre-training Framework for Language Understanding, 2020, AAAI.
[5] Alex Wang, et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding, 2018, BlackboxNLP@EMNLP.
[6] Yinhan Liu, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[7] Zhenzhong Lan, et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, 2020, ICLR.
[8] Yu Sun, et al. ERNIE: Enhanced Representation through Knowledge Integration, 2019, arXiv.
[9] Amirreza Shirani, et al. Learning Emphasis Selection for Written Text in Visual Media from Crowd-Sourced Label Distributions, 2019, ACL.
[10] Amirreza Shirani, et al. SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual Media, 2020, SemEval.