IIE-NLP-NUT at SemEval-2020 Task 4: Guiding PLM with Prompt Template Reconstruction Strategy for ComVE
Yue Hu | Luxi Xing | Wei Peng | Yuqiang Xie