暂无分享,去创建一个
[1] S. Larson. The shrinkage of the coefficient of multiple correlation. , 1931 .
[2] J. Sherman,et al. Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix , 1950 .
[3] Pradeep Dubey,et al. Mathematical Properties of the Banzhaf Power Index , 1979, Math. Oper. Res..
[4] L. Shapley. A Value for n-person Games , 1988 .
[5] Peter L. Hammer,et al. Approximations of pseudo-Boolean functions; applications to game theory , 1992, ZOR Methods Model. Oper. Res..
[6] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[7] On an axiomatization of the Banzhaf value without the additivity axiom , 1997 .
[8] R. Dennis Cook,et al. Detection of Influential Observation in Linear Regression , 2000, Technometrics.
[9] Alon Lavie,et al. A Classifier-Based Parser with Linear Run-Time Complexity , 2005, IWPT.
[10] Stephen Clark,et al. Transition-Based Parsing of the Chinese Treebank using a Global Discriminative Model , 2009, IWPT.
[11] Motoaki Kawanabe,et al. How to Explain Individual Classification Decisions , 2009, J. Mach. Learn. Res..
[12] Erik Strumbelj,et al. An Efficient Explanation of Individual Classifications using Game Theory , 2010, J. Mach. Learn. Res..
[13] Christopher Potts,et al. Learning Word Vectors for Sentiment Analysis , 2011, ACL.
[14] Joakim Nivre,et al. A Dynamic Oracle for Arc-Eager Dependency Parsing , 2012, COLING.
[15] Yue Zhang,et al. Fast and Accurate Shift-Reduce Constituent Parsing , 2013, ACL.
[16] Christopher Potts,et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank , 2013, EMNLP.
[17] I. Katsev. The Least Square Values for Games with Restricted Cooperation , 2013 .
[18] Yoon Kim,et al. Convolutional Neural Networks for Sentence Classification , 2014, EMNLP.
[19] Andrew Zisserman,et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps , 2013, ICLR.
[20] Jeffrey Pennington,et al. GloVe: Global Vectors for Word Representation , 2014, EMNLP.
[21] Alexander Binder,et al. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation , 2015, PloS one.
[22] Fei-Fei Li,et al. Visualizing and Understanding Recurrent Networks , 2015, ArXiv.
[23] Yoshua Bengio,et al. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention , 2015, ICML.
[24] Xiang Zhang,et al. Character-level Convolutional Networks for Text Classification , 2015, NIPS.
[25] Koray Kavukcuoglu,et al. Visual Attention , 2020, Computational Models for Cognitive Vision.
[26] Alex Graves,et al. DRAW: A Recurrent Neural Network For Image Generation , 2015, ICML.
[27] Wei Xu,et al. ABC-CNN: An Attention Based Convolutional Neural Network for Visual Question Answering , 2015, ArXiv.
[28] Kate Saenko,et al. Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering , 2015, ECCV.
[29] Alexander J. Smola,et al. Stacked Attention Networks for Image Question Answering , 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[30] Regina Barzilay,et al. Rationalizing Neural Predictions , 2016, EMNLP.
[31] Daniel Jurafsky,et al. Understanding Neural Networks through Representation Erasure , 2016, ArXiv.
[32] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[33] Yair Zick,et al. Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems , 2016, 2016 IEEE Symposium on Security and Privacy (SP).
[34] Xinlei Chen,et al. Visualizing and Understanding Neural Models in NLP , 2015, NAACL.
[35] Avanti Shrikumar,et al. Learning Important Features Through Propagating Activation Differences , 2017, ICML.
[36] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[37] Scott Lundberg,et al. A Unified Approach to Interpreting Model Predictions , 2017, NIPS.
[38] Ankur Taly,et al. Axiomatic Attribution for Deep Networks , 2017, ICML.
[39] Wesley De Neve,et al. Explaining Character-Aware Neural Networks for Word-Level Prediction: Do They Discover Linguistic Rules? , 2018, EMNLP.
[40] Le Song,et al. L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data , 2018, ICLR.
[41] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.