Sight-Seeing in the Eyes of Deep Neural Networks
Viktor de Boer | Ronald Siebes | Seyran Khademi | Carola Hein | Xiangwei Shi | Tino Mager | Jan C. van Gemert
[1] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[2] Davide Modolo, et al. Do Semantic Parts Emerge in Convolutional Neural Networks?, 2016, International Journal of Computer Vision.
[3] Bolei Zhou, et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Sameer Singh, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, NAACL.
[5] Alexei A. Efros, et al. What makes Paris look like Paris?, 2015, Commun. ACM.
[6] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[8] Josef Sivic, et al. NetVLAD: CNN Architecture for Weakly Supervised Place Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[10] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[11] Bolei Zhou, et al. Object Detectors Emerge in Deep Scene CNNs, 2014, ICLR.
[12] Andrea Vedaldi, et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).