Improving performance of deep learning models with axiomatic attribution priors and expected gradients
Gabriel Erion | Joseph D. Janizek | Pascal Sturmfels | Scott M. Lundberg | Su-In Lee
[1] Kristian Kersting,et al. Making deep neural networks right for the right scientific reasons by interacting with their explanations , 2020, Nat. Mach. Intell..
[2] Hugh Chen,et al. From local explanations to global understanding with explainable AI for trees , 2020, Nature Machine Intelligence.
[3] Chandan Singh,et al. Interpretations are useful: penalizing explanations to align neural networks with prior knowledge , 2019, ICML.
[4] Joel Nothman,et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python , 2019, ArXiv.
[5] Frederick Liu,et al. Incorporating Priors with Feature Attribution on Text Classification , 2019, ACL.
[6] S. Jha,et al. Robust Attribution Regularization , 2019, NeurIPS.
[7] Gabriel Erion,et al. Explainable AI for Trees: From Local Explanations to Global Understanding , 2019, ArXiv.
[8] Aleksander Madry,et al. Adversarial Examples Are Not Bugs, They Are Features , 2019, NeurIPS.
[9] G. Corrado,et al. Using a Deep Learning Algorithm and Integrated Gradients Explanation to Assist Grading for Diabetic Retinopathy. , 2019, Ophthalmology.
[10] Benjamin Recht,et al. Do ImageNet Classifiers Generalize to ImageNet? , 2019, ICML.
[11] Michael I. Jordan,et al. Theoretically Principled Trade-off between Robustness and Accuracy , 2019, ICML.
[12] Aleksander Madry,et al. Robustness May Be at Odds with Accuracy , 2018, ICLR.
[13] Marcus A. Badgeley,et al. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study , 2018, PLoS medicine.
[14] Scott M. Lundberg,et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery , 2018, Nature Biomedical Engineering.
[15] Beth Wilmot,et al. Functional Genomic Landscape of Acute Myeloid Leukemia , 2018, Nature.
[16] Chenchen Liu,et al. Towards Robust Training of Neural Networks by Regularizing Adversarial Gradients , 2018, ArXiv.
[17] Sebastian Nowozin,et al. Adversarially Robust Training through Structured Gradient Regularization , 2018, ArXiv.
[18] Matt Fredrikson,et al. Supervising Feature Influence , 2018, ArXiv.
[19] Raja Giryes,et al. Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization , 2018, ECCV.
[20] Bin Yu,et al. Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs , 2018, ICLR.
[21] Andreas Bender,et al. DeepSynergy: predicting anti-cancer drug synergy with Deep Learning , 2017, Bioinform..
[22] Andrew Slavin Ross,et al. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients , 2017, AAAI.
[23] Hongyi Zhang,et al. mixup: Beyond Empirical Risk Minimization , 2017, ICLR.
[24] Scott Lundberg,et al. A Unified Approach to Interpreting Model Predictions , 2017, NIPS.
[25] Andrea Vedaldi,et al. Interpretable Explanations of Black Boxes by Meaningful Perturbation , 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[26] Avanti Shrikumar,et al. Learning Important Features Through Propagating Activation Differences , 2017, ICML.
[27] Andrew Slavin Ross,et al. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations , 2017, IJCAI.
[28] Ankur Taly,et al. Axiomatic Attribution for Deep Networks , 2017, ICML.
[29] Abhishek Das,et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization , 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[30] Max Welling,et al. Semi-Supervised Classification with Graph Convolutional Networks , 2016, ICLR.
[31] Danilo Comminiello,et al. Group sparse regularization for deep neural networks , 2016, Neurocomputing.
[32] Sergey Ioffe,et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning , 2016, AAAI.
[33] Qian Jiang,et al. Meis1 is critical to the maintenance of human acute myeloid leukemia cells independent of MLL rearrangements , 2017, Annals of Hematology.
[34] Yuan Yu,et al. TensorFlow: A system for large-scale machine learning , 2016, OSDI.
[35] Yair Zick,et al. Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems , 2016, 2016 IEEE Symposium on Security and Privacy (SP).
[36] Alexander Binder,et al. Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers , 2016, ICANN.
[37] Weihong Deng,et al. Very deep convolutional neural network based image classification using small training sample size , 2015, 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR).
[38] Jack Xin,et al. A Weighted Difference of Anisotropic and Isotropic Total Variation Model for Image Processing , 2015, SIAM J. Imaging Sci..
[39] Daniel S. Himmelstein,et al. Understanding multicellular function and disease with human tissue-specific networks , 2015, Nature Genetics.
[40] Andrew Zisserman,et al. Very Deep Convolutional Networks for Large-Scale Image Recognition , 2014, ICLR.
[41] Michael S. Bernstein,et al. ImageNet Large Scale Visual Recognition Challenge , 2014, International Journal of Computer Vision.
[42] Wei Cheng,et al. Graph-regularized dual Lasso for robust eQTL mapping , 2014, Bioinform..
[43] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[44] Erik Strumbelj,et al. Explaining prediction models and individual predictions with feature contributions , 2014, Knowledge and Information Systems.
[45] Qianshun Chang,et al. Efficient Algorithm for Isotropic and Anisotropic Total Variation Deblurring and Denoising , 2013, J. Appl. Math..
[46] Johnathan M. Bardsley,et al. Laplace-distributed increments, the Laplace prior, and edge-preserving regularization , 2012 .
[47] Ashraf A. Kassim,et al. Gini Index as Sparsity Measure for Signal Reconstruction from Compressive Samples , 2011, IEEE Journal of Selected Topics in Signal Processing.
[48] Gaël Varoquaux,et al. Scikit-learn: Machine Learning in Python , 2011, J. Mach. Learn. Res..
[49] Scott T. Rickard,et al. Comparing Measures of Sparsity , 2008, IEEE Transactions on Information Theory.
[50] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[51] B. Williams,et al. Mapping and quantifying mammalian transcriptomes by RNA-Seq , 2008, Nature Methods.
[52] John D. Storey,et al. Capturing Heterogeneity in Gene Expression Studies by Surrogate Variable Analysis , 2007, PLoS genetics.
[53] Pablo Tamayo,et al. Gene set enrichment analysis: A knowledge-based approach for interpreting genome-wide expression profiles , 2005, Proceedings of the National Academy of Sciences of the United States of America.
[54] Eric J. Friedman,et al. Paths and consistency in additive cost sharing , 2004, Int. J. Game Theory.
[55] R. Verhaak,et al. Prognostically useful gene-expression profiles in acute myeloid leukemia. , 2004, The New England journal of medicine.
[56] R. Tibshirani. Regression Shrinkage and Selection via the Lasso , 1996 .
[57] Y. Benjamini,et al. Controlling the false discovery rate: a practical and powerful approach to multiple testing , 1995 .
[58] Harris Drucker,et al. Improving generalization performance using double backpropagation , 1992, IEEE Trans. Neural Networks.