Learning to Learn to be Right for the Right Reasons
Kentaro Inui | Benjamin Heinzerling | Pride Kavumba | Ana Brassard