Cecilia Testart | Julius Adebayo | Leilani H. Gilpin | Nathaniel Fruchter
[1] Cynthia Rudin, et al. This Looks Like That: Deep Learning for Interpretable Image Recognition, 2018.
[2] R. Caruana, et al. Detecting Bias in Black-Box Models Using Transparent Model Distillation, 2017.
[3] Bolei Zhou, et al. Learning Deep Features for Discriminative Localization, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[4] Abhishek Das, et al. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[5] Cynthia Rudin, et al. Algorithms for interpretable machine learning, 2014, KDD.
[6] Trevor Hastie, et al. Causal Interpretations of Black-Box Models, 2019, Journal of Business & Economic Statistics.
[7] David Weinberger, et al. Accountability of AI Under the Law: The Role of Explanation, 2017, ArXiv.
[8] Eneldo Loza Mencía, et al. DeepRED - Rule Extraction from Deep Neural Networks, 2016, DS.
[9] Nick Doty, et al. Privacy is an essentially contested concept: a multi-dimensional analytic for mapping privacy, 2016, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
[10] Gilles Louppe, et al. Understanding variable importances in forests of randomized trees, 2013, NIPS.
[11] Geoffrey E. Hinton, et al. Dynamic Routing Between Capsules, 2017, NIPS.
[12] Been Kim, et al. Towards A Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[13] Cynthia Rudin, et al. The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification, 2014, NIPS.
[14] Xin Zhang, et al. End to End Learning for Self-Driving Cars, 2016, ArXiv.
[15] Stefan Carlsson, et al. CNN Features Off-the-Shelf: An Astounding Baseline for Recognition, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops.
[16] Yoshua Bengio, et al. How transferable are features in deep neural networks?, 2014, NIPS.
[17] Bolei Zhou, et al. Network Dissection: Quantifying Interpretability of Deep Visual Representations, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[18] Lalana Kagal, et al. Explaining Explanations: An Overview of Interpretability of Machine Learning, 2018, 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
[19] Brandon M. Greenwell, et al. Interpretable Machine Learning, 2019, Hands-On Machine Learning with R.
[20] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[21] Lalana Kagal, et al. Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning, 2018.
[22] Trevor Darrell, et al. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence, 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[23] Erik Strumbelj, et al. Explaining prediction models and individual predictions with feature contributions, 2014, Knowledge and Information Systems.
[24] Seth Flaxman, et al. European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation", 2016, AI Magazine.
[25] Deirdre K. Mulligan, et al. Saving Governance-By-Design, 2018.
[26] Rishabh Singh, et al. Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections, 2018, NeurIPS.
[27] S. Landau. Control use of data to protect privacy, 2015, Science.
[28] J. Claybrook, et al. Autonomous vehicles: No driver…no regulation?, 2018, Science.
[29] Zhenlong Yuan, et al. Droid-Sec: deep learning in Android malware detection, 2014, SIGCOMM.
[30] Cynthia Rudin, et al. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model, 2015, ArXiv.
[31] D. Vladeck. Machines without Principals: Liability Rules and Artificial Intelligence, 2014.
[32] J. Friedman. Greedy function approximation: A gradient boosting machine, 2001, Annals of Statistics.
[33] Raymond Sheh, et al. Defining Explainable AI for Requirements Analysis, 2018, KI - Künstliche Intelligenz.
[34] Joachim Diederich, et al. Survey and critique of techniques for extracting rules from trained artificial neural networks, 1995, Knowledge-Based Systems.
[35] Martin Wattenberg, et al. TCAV: Relative concept importance testing with Linear Concept Activation Vectors, 2018.