Exploring Interpretability in Event Extraction: Multitask Learning of a Neural Event Classifier and an Explanation Decoder
[1] Mihai Surdeanu, et al. Odin's Runes: A Rule Language for Information Extraction, 2016, LREC.
[2] Avrim Blum, et al. The Bottleneck, 2021, Monopsony Capitalism.
[3] Geoffrey E. Hinton, et al. Distilling a Neural Network Into a Soft Decision Tree, 2017, CEx@AI*IA.
[4] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NIPS.
[5] Yue Wang, et al. The GENIA Event Extraction Shared Task, 2013 Edition - Overview, 2013, BioNLP@ACL.
[6] Clayton T. Morrison, et al. Large-scale automated machine reading discovers new cancer-driving mechanisms, 2018, Database J. Biol. Databases Curation.
[7] Christopher D. Manning, et al. Effective Approaches to Attention-based Neural Machine Translation, 2015, EMNLP.
[8] Yan Liu, et al. Interpretable Deep Models for ICU Outcome Prediction, 2016, AMIA.
[9] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[10] Mihai Surdeanu, et al. This before That: Causal Precedence in the Biomedical Domain, 2016, BioNLP@ACL.
[11] Zachary Chase Lipton. The Mythos of Model Interpretability, 2016, ACM Queue.
[12] Xin Jiang, et al. Interpretable Charge Predictions for Criminal Cases: Learning to Generate Court Views from Fact Descriptions, 2018, NAACL.
[13] Jeffrey Dean, et al. Efficient Estimation of Word Representations in Vector Space, 2013, ICLR.
[14] Trevor Darrell, et al. Generating Visual Explanations, 2016, ECCV.
[15] Thomas Lukasiewicz, et al. e-SNLI: Natural Language Inference with Natural Language Explanations, 2018, NeurIPS.
[16] Jude W. Shavlik, et al. in Advances in Neural Information Processing, 1996.