Interpreting CNNs via Decision Trees
Quanshi Zhang | Yu Yang | Song-Chun Zhu | Ying Nian Wu
[1] Pietro Perona, et al. Strong supervision from weak annotation: Interactive training of deformable part models, 2011, 2011 International Conference on Computer Vision.
[2] Andrea Vedaldi, et al. Net2Vec: Quantifying and Explaining How Concepts are Encoded by Filters in Deep Neural Networks, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[3] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[4] Quanshi Zhang, et al. Interpreting CNN knowledge via an Explanatory Graph, 2017, AAAI.
[5] Simon Haykin, et al. Gradient-Based Learning Applied to Document Recognition, 2001.
[6] Devi Parikh, et al. Do explanations make VQA models more predictable to a human?, 2018, EMNLP.
[7] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[8] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[9] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[10] Sanja Fidler, et al. Detect What You Can: Detecting and Representing Objects Using Holistic Models and Body Parts, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[11] Andrea Vedaldi, et al. Understanding deep image representations by inverting them, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Bolei Zhou, et al. Interpretable Basis Decomposition for Visual Explanation, 2018, ECCV.
[13] Yoshua Bengio, et al. How transferable are features in deep neural networks?, 2014, NIPS.
[14] Zachary Chase Lipton. The mythos of model interpretability, 2016, ACM Queue.
[15] Geoffrey E. Hinton, et al. Dynamic Routing Between Capsules, 2017, NIPS.
[16] Geoffrey E. Hinton, et al. Distilling a Neural Network Into a Soft Decision Tree, 2017, CEx@AI*IA.
[17] Mathieu Aubry, et al. Understanding Deep Features with Computer-Generated Imagery, 2015, ICCV.
[18] Eric Horvitz, et al. Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration, 2016, AAAI.
[19] Mike Wu, et al. Beyond Sparsity: Tree Regularization of Deep Models for Interpretability, 2017, AAAI.
[20] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[21] Bolei Zhou, et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[22] Klaus-Robert Müller, et al. Learning how to explain neural networks: PatternNet and PatternAttribution, 2017, ICLR.
[23] Quanshi Zhang, et al. Unsupervised Learning of Neural Networks to Explain Neural Networks, 2018, ArXiv.
[24] Yan Liu, et al. Interpretable Deep Models for ICU Outcome Prediction, 2016, AMIA.
[25] Albert Gordo, et al. Transparent Model Distillation, 2018, ArXiv.
[26] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Quanshi Zhang, et al. Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning, 2016, AAAI.
[28] Quanshi Zhang, et al. Network Transplanting, 2018, ArXiv.
[29] Marcel Simon, et al. Neural Activation Constellations: Unsupervised Part Model Discovery with Convolutional Networks, 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[30] Pieter Abbeel, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016, NIPS.
[31] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[32] Joachim Denzler, et al. Part Detector Discovery in Deep Convolutional Neural Networks, 2014, ACCV.
[33] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[34] Quanshi Zhang, et al. Interpretable Convolutional Neural Networks, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[35] Thomas Brox, et al. Inverting Visual Representations with Convolutional Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[37] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[38] Quanshi Zhang, et al. Examining CNN representations with respect to Dataset Bias, 2017, AAAI.
[39] Bolei Zhou, et al. Object Detectors Emerge in Deep Scene CNNs, 2014, ICLR.
[40] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[41] Quanshi Zhang, et al. Visual interpretability for deep learning: a survey, 2018, Frontiers of Information Technology & Electronic Engineering.
[42] Pietro Perona, et al. The Caltech-UCSD Birds-200-2011 Dataset, 2011.
[43] Natalie Wolchover. New Theory Cracks Open the Black Box of Deep Learning, 2017.
[44] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[45] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, KDD.
[46] Christopher Burgess, et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, 2017, ICLR.
[47] Jie Chen, et al. Explainable Neural Networks based on Additive Index Models, 2018, ArXiv.