CNRL at SemEval-2020 Task 5: Modelling Causal Reasoning in Language with Multi-Head Self-Attention Weights Based Counterfactual Detection
[1] Rémi Louf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[2] Sen Wang, et al. A Multi-level Neural Network for Implicit Causality Detection in Web Texts, 2019, Neurocomputing.
[3] Jesse Vig, et al. A Multiscale Visualization of Attention in the Transformer Model, 2019, ACL.
[4] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[5] Yong Fang, et al. Self Multi-Head Attention-based Convolutional Neural Networks for fake news detection, 2019, PLoS ONE.
[6] Nabiha Asghar, et al. Automatic Extraction of Causal Relations from Natural Language Texts: A Comprehensive Survey, 2016, ArXiv.
[7] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[8] Thomas Wolf, et al. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, 2019, ArXiv.
[9] Stan Matwin, et al. SemEval-2020 Task 5: Counterfactual Recognition, 2020, SemEval.
[10] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[11] Zornitsa Kozareva, et al. SemEval-2012 Task 7: Choice of Plausible Alternatives: An Evaluation of Commonsense Causal Reasoning, 2012, SemEval.
[12] Lipika Dey, et al. Automatic Extraction of Causal Relations from Text using Linguistically Informed Deep Neural Networks, 2018, SIGDIAL.
[13] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[14] Fedor Moiseev, et al. Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned, 2019, ACL.
[15] Yejin Choi, et al. Social IQA: Commonsense Reasoning about Social Interactions, 2019, EMNLP.
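The title names the system's core signal: per-head self-attention weights taken from a pretrained Transformer encoder ([1], [7], [10]). As a rough illustration only, and not the authors' released system, the sketch below shows how those weights can be read out of a BERT encoder with HuggingFace Transformers [1]; the checkpoint, the example sentence, and the pooling step are all assumptions made for the illustration.

```python
# Minimal sketch, assuming bert-base-uncased and a toy pooling scheme;
# this is NOT the CNRL authors' architecture, only a demonstration of
# extracting per-head self-attention weights with HuggingFace Transformers.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

sentence = "If the team had trained harder, they would have won the final."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
attn = torch.stack(outputs.attentions)  # (layers, batch, heads, seq, seq)
print(attn.shape)                       # 12 layers x 12 heads for bert-base

# One simple (assumed) feature: the attention each token receives,
# averaged over layers, heads, and query positions. A downstream
# classifier could consume such scores, the hope being that
# counterfactual cues like "If", "had", and "would" attract attention.
received = attn.mean(dim=(0, 2, 3))     # (batch, seq_len)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in zip(tokens, received[0].tolist()):
    print(f"{tok:>12}  {score:.4f}")
```

Tools such as BertViz [3] visualize these same attention tensors interactively, and head-pruning analyses [14] indicate that only a subset of heads carries most of the usable signal, which motivates inspecting weights head by head rather than in aggregate.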