Quanshi Zhang | Song-Chun Zhu | Robert C. Qiu | Yu Yang | Zenan Ling | Haotian Ma
[1] Yi Liu et al. Teaching Compositionality to CNNs. CVPR, 2017.
[2] Quanshi Zhang et al. Interpretable Convolutional Neural Networks. CVPR, 2018.
[3] Thomas Brox et al. Inverting Visual Representations with Convolutional Networks. CVPR, 2016.
[4] Naftali Tishby et al. Opening the Black Box of Deep Neural Networks via Information. arXiv, 2017.
[5] Andrew Zisserman et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. ICLR, 2013.
[6] Zhe L. Lin et al. Top-Down Neural Attention by Excitation Backprop. International Journal of Computer Vision, 2016.
[7] Quanshi Zhang et al. Unsupervised Learning of Neural Networks to Explain Neural Networks. arXiv, 2018.
[8] Avanti Shrikumar et al. Not Just a Black Box: Interpretable Deep Learning by Propagating Activation Differences. 2016.
[9] Hod Lipson et al. Understanding Neural Networks Through Deep Visualization. arXiv, 2015.
[10] Hang Su et al. Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples. arXiv, 2017.
[11] Demis Hassabis et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016.
[12] Bolei Zhou et al. Object Detectors Emerge in Deep Scene CNNs. ICLR, 2014.
[13] Yuandong Tian et al. ELF: An Extensive, Lightweight and Flexible Research Platform for Real-time Strategy Games. NIPS, 2017.
[14] Rob Fergus et al. Visualizing and Understanding Convolutional Networks. ECCV, 2013.
[15] Scott Lundberg et al. A Unified Approach to Interpreting Model Predictions. NIPS, 2017.
[16] Geoffrey E. Hinton et al. Dynamic Routing Between Capsules. NIPS, 2017.
[17] Christopher Burgess et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. ICLR, 2016.
[18] Marcel Simon et al. Neural Activation Constellations: Unsupervised Part Model Discovery with Convolutional Networks. ICCV, 2015.
[19] Alexander Wong et al. Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks. CVPR Workshops, 2017.
[20] Pieter Abbeel et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. NIPS, 2016.
[21] Bolei Zhou et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations. CVPR, 2017.
[22] Natalie Wolchover. New Theory Cracks Open the Black Box of Deep Learning. 2017.
[23] Eric P. Xing et al. Harnessing Deep Neural Networks with Logic Rules. ACL, 2016.
[24] Andrea Vedaldi et al. Understanding deep image representations by inverting them. CVPR, 2015.
[25] Anna Shcherbina et al. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences. arXiv, 2016.
[26] Jie Chen et al. Explainable Neural Networks based on Additive Index Models. arXiv, 2018.
[27] Bolei Zhou et al. Learning Deep Features for Discriminative Localization. CVPR, 2016.
[28] Renjie Liao et al. Learning Deep Parsimonious Representations. NIPS, 2016.
[29] Klaus-Robert Müller et al. Learning how to explain neural networks: PatternNet and PatternAttribution. ICLR, 2017.
[30] Percy Liang et al. Understanding Black-box Predictions via Influence Functions. ICML, 2017.
[31] Carlos Guestrin et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. arXiv, 2016.
[32] Alexei A. Efros et al. Unsupervised Discovery of Mid-Level Discriminative Patches. ECCV, 2012.
[33] Andrea Vedaldi et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation. ICCV, 2017.
[34] Wei Sun et al. Interpretable R-CNN. arXiv, 2017.