Faithful and Customizable Explanations of Black Box Models
Himabindu Lakkaraju | Ece Kamar | Rich Caruana | Jure Leskovec
[1] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[2] Jure Leskovec, et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction, 2016, KDD.
[3] Lior Rokach, et al. Top-down induction of decision trees classifiers - a survey, 2005, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews).
[4] Ivan Bratko, et al. Nomograms for visualizing support vector machines, 2005, KDD '05.
[5] Ramakrishnan Srikant, et al. Fast algorithms for mining association rules, 1998, VLDB.
[6] Kwan-Liu Ma, et al. PaintingClass: interactive construction, visualization and exploration of decision trees, 2003, KDD '03.
[7] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[8] Johannes Gehrke, et al. Intelligible models for classification and regression, 2012, KDD.
[9] Hod Lipson, et al. Understanding Neural Networks Through Deep Visualization, 2015, arXiv.
[10] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[11] Vahab S. Mirrokni, et al. Non-monotone submodular maximization under matroid and knapsack constraints, 2009, STOC '09.
[12] Max Welling, et al. Visualizing Deep Neural Network Decisions: Prediction Difference Analysis, 2017, ICLR.
[13] Cynthia Rudin, et al. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model, 2015, arXiv.
[14] Anna Shcherbina, et al. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, 2016, arXiv.
[15] Samir Khuller, et al. The Budgeted Maximum Coverage Problem, 1999, Inf. Process. Lett.