Automated Dependence Plots
[1] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, ArXiv.
[2] Gillian K. Hadfield, et al. Regulatory Markets for AI Safety, 2019, ArXiv.
[3] Franco Turini, et al. Local Rule-Based Explanations of Black Box Decision Systems, 2018, ArXiv.
[4] Iain Murray, et al. Masked Autoregressive Flow for Density Estimation, 2017, NIPS.
[5] Pradeep Ravikumar, et al. Representer Point Selection for Explaining Deep Neural Networks, 2018, NeurIPS.
[6] Samy Bengio, et al. Density estimation using Real NVP, 2016, ICLR.
[7] Chris Russell, et al. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, 2017, ArXiv.
[8] Barnabás Póczos, et al. Transformation Autoregressive Networks, 2018, ICML.
[9] Martin Wattenberg, et al. SmoothGrad: removing noise by adding noise, 2017, ArXiv.
[10] Yair Zick, et al. Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems, 2016, 2016 IEEE Symposium on Security and Privacy (SP).
[11] Le Song, et al. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation, 2018, ICML.
[12] Pradeep Ravikumar, et al. Deep Density Destructors, 2018, ICML.
[13] Alexandra Chouldechova, et al. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments, 2016, Big Data.
[14] Suchi Saria, et al. Tutorial: Safe and Reliable Machine Learning, 2019, ArXiv.
[15] Michael J. Best, et al. Active set algorithms for isotonic regression; A unifying framework, 1990, Math. Program.
[16] Alexandra Chouldechova, et al. Fairer and more accurate, but for whom?, 2017, ArXiv.
[17] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[18] Li Fei-Fei, et al. Perceptual Losses for Real-Time Style Transfer and Super-Resolution, 2016, ECCV.
[19] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[20] Rishabh Singh, et al. Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections, 2018, NeurIPS.
[21] Andrew Zisserman, et al. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, 2013, ICLR.
[22] Johannes Stallkamp, et al. The German Traffic Sign Recognition Benchmark: A multi-class classification competition, 2011, The 2011 International Joint Conference on Neural Networks.
[23] Emil Pitkin, et al. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation, 2013, arXiv:1309.6392.
[22] Johannes Stallkamp,et al. The German Traffic Sign Recognition Benchmark: A multi-class classification competition , 2011, The 2011 International Joint Conference on Neural Networks.
[23] Emil Pitkin,et al. Peeking Inside the Black Box: Visualizing Statistical Learning With Plots of Individual Conditional Expectation , 2013, 1309.6392.
[24] J. Friedman. Greedy function approximation: A gradient boosting machine. , 2001 .
[25] Amit Dhurandhar,et al. Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives , 2018, NeurIPS.
[26] Martin Wattenberg,et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) , 2017, ICML.
[27] Hugo Larochelle,et al. MADE: Masked Autoencoder for Distribution Estimation , 2015, ICML.
[28] Martin Wattenberg,et al. The What-If Tool: Interactive Probing of Machine Learning Models , 2019, IEEE Transactions on Visualization and Computer Graphics.
[29] Scott Lundberg,et al. A Unified Approach to Interpreting Model Predictions , 2017, NIPS.
[30] Ankur Taly,et al. Axiomatic Attribution for Deep Networks , 2017, ICML.
[31] Kush R. Varshney,et al. On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products , 2016, Big Data.
[32] Daan Wierstra,et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models , 2014, ICML.
[33] Wenbo Guo,et al. Explaining Deep Learning Models - A Bayesian Non-parametric Approach , 2018, NeurIPS.
[34] Avanti Shrikumar,et al. Learning Important Features Through Propagating Activation Differences , 2017, ICML.