[1] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[2] Hui Xiong, et al. A Comprehensive Survey on Transfer Learning, 2021, Proceedings of the IEEE.
[3] Qiang Yang, et al. A Survey on Transfer Learning, 2010, IEEE Transactions on Knowledge and Data Engineering.
[4] Yixin Chen, et al. Deep Model Transferability from Attribution Maps, 2019, NeurIPS.
[5] Max Welling, et al. Visualizing Deep Neural Network Decisions: Prediction Difference Analysis, 2017, ICLR.
[6] Steve Hanneke, et al. On the Value of Target Data in Transfer Learning, 2020, NeurIPS.
[7] Fuzhen Zhuang, et al. Supervised Representation Learning: Transfer Learning with Deep Autoencoders, 2015, IJCAI.
[8] Rob Fergus, et al. Visualizing and Understanding Convolutional Networks, 2013, ECCV.
[9] Scott Lundberg, et al. A Unified Approach to Interpreting Model Predictions, 2017, NIPS.
[10] Ali Farhadi, et al. You Only Look Once: Unified, Real-Time Object Detection, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Chandan Singh, et al. Interpretations are useful: penalizing explanations to align neural networks with prior knowledge, 2019, ICML.
[12] Matthieu Cord, et al. RUBi: Reducing Unimodal Biases in Visual Question Answering, 2019, NeurIPS.
[13] Pietro Perona, et al. One-shot learning of object categories, 2006, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[14] Mark Sandler, et al. MobileNetV2: Inverted Residuals and Linear Bottlenecks, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[15] Dumitru Erhan, et al. A Benchmark for Interpretability Methods in Deep Neural Networks, 2018, NeurIPS.
[16] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[17] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[18] Jon Kleinberg, et al. Transfusion: Understanding Transfer Learning for Medical Imaging, 2019, NeurIPS.
[19] Ali Farhadi, et al. YOLOv3: An Incremental Improvement, 2018, ArXiv.
[20] Jon Howell, et al. Asirra: a CAPTCHA that exploits interest-aligned manual image categorization, 2007, CCS '07.
[21] Anna Shcherbina, et al. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, 2016, ArXiv.
[22] Ramprasaath R. Selvaraju, et al. Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization, 2016.
[23] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[24] Ankur Taly, et al. Axiomatic Attribution for Deep Networks, 2017, ICML.
[25] Margaret Mitchell, et al. VQA: Visual Question Answering, 2015, International Journal of Computer Vision.
[26] Klaus-Robert Müller, et al. Explanations can be manipulated and geometry is to blame, 2019, NeurIPS.
[27] Gregory W. Wornell, et al. Learning New Tricks From Old Dogs: Multi-Source Transfer Learning From Pre-Trained Networks, 2019, NeurIPS.
[28] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[29] Zhe L. Lin, et al. Top-Down Neural Attention by Excitation Backprop, 2016, International Journal of Computer Vision.
[30] Yuan Yu, et al. TensorFlow: A system for large-scale machine learning, 2016, OSDI.
[31] Klaus-Robert Müller, et al. Learning how to explain neural networks: PatternNet and PatternAttribution, 2017, ICLR.
[32] Hongxia Jin, et al. Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[33] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[34] Alexander Binder, et al. Explaining nonlinear classification decisions with deep Taylor decomposition, 2015, Pattern Recognit.
[35] Been Kim, et al. Sanity Checks for Saliency Maps, 2018, NeurIPS.
[36] Cynthia Rudin, et al. This Looks Like That: Deep Learning for Interpretable Image Recognition, 2018.
[37] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, ArXiv.
[38] Bolei Zhou, et al. Learning Deep Features for Discriminative Localization, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Avanti Shrikumar, et al. Learning Important Features Through Propagating Activation Differences, 2017, ICML.
[40] Michael I. Jordan, et al. Advances in Neural Information Processing Systems 30, 2017.
[41] Ali Farhadi, et al. YOLO9000: Better, Faster, Stronger, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[42] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[43] Martin Wattenberg, et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV), 2017, ICML.
[44] David Duvenaud, et al. Explaining Image Classifiers by Counterfactual Generation, 2018, ICLR.
[45] Cengiz Öztireli, et al. Towards better understanding of gradient-based attribution methods for Deep Neural Networks, 2017, ICLR.