[1] Byron C. Wallace, et al. Attention is not Explanation, 2019, NAACL.
[2] Hakan Inan, et al. Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling, 2016, ICLR.
[3] Manaal Faruqui, et al. Attention Interpretability Across NLP Tasks, 2019, ArXiv.
[4] Graham Neubig, et al. Learning to Deceive with Attention-Based Explanations, 2020, ACL.
[5] Olivier Bachem, et al. Assessing Generative Models via Precision and Recall, 2018, NeurIPS.
[6] Cynthia Rudin, et al. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, 2018, Nature Machine Intelligence.
[7] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[8] Yuval Pinter, et al. Attention is not not Explanation, 2019, EMNLP.
[9] Christopher Potts, et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank, 2013, EMNLP.
[10] Naftali Tishby, et al. The information bottleneck method, 2000, ArXiv.
[11] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, ArXiv (1702.08608).
[12] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[13] Maria Leonor Pacheco, et al. of the Association for Computational Linguistics, 2001.
[14] Christopher Potts, et al. Learning Word Vectors for Sentiment Analysis, 2011, ACL.
[15] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[16] Rico Sennrich, et al. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives, 2019, EMNLP.
[17] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[18] Peter Szolovits, et al. MIMIC-III, a freely accessible critical care database, 2016, Scientific Data.
[19] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.