Axiomatic Interpretability for Multiclass Additive Models
Xuezhou Zhang | Sarah Tan | Paul Koch | Yin Lou | Urszula Chajewska | Rich Caruana
[1] John T. Ormerod, et al. Penalized Wavelets: Embedding Wavelets into Semiparametric Regression, 2011.
[2] Tommi S. Jaakkola, et al. Towards Robust Interpretability with Self-Explaining Neural Networks, 2018, NeurIPS.
[3] Albert Gordo, et al. Transparent Model Distillation, 2018, arXiv.
[4] Margo I. Seltzer, et al. Scalable Bayesian Rule Lists, 2016, ICML.
[5] Zachary Chase Lipton. The Mythos of Model Interpretability, 2016, ACM Queue.
[6] Carlos Guestrin, et al. Anchors: High-Precision Model-Agnostic Explanations, 2018, AAAI.
[7] Jure Leskovec, et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction, 2016, KDD.
[8] Paul H. C. Eilers, et al. Flexible Smoothing with B-splines and Penalties, 1996.
[9] Cynthia Rudin, et al. Interpretable Classification Models for Recidivism Prediction, 2015, arXiv:1503.07810.
[10] Albert Gordo, et al. Learning Global Additive Explanations for Neural Nets Using Model Distillation, 2018.
[11] Been Kim, et al. Towards a Rigorous Science of Interpretable Machine Learning, 2017, arXiv:1702.08608.
[12] P. Bühlmann, et al. Boosting with the L2 Loss, 2003.
[13] Y. Freund, et al. Discussion of the paper "Additive Logistic Regression: A Statistical View of Boosting", 2000.
[14] Joachim M. Buhmann, et al. The Balanced Accuracy and Its Posterior Distribution, 2010, 20th International Conference on Pattern Recognition.
[15] Daniel Servén, et al. pyGAM: Generalized Additive Models in Python, 2018.
[16] S. Wood. Generalized Additive Models: An Introduction with R, 2006.
[17] Carlos Guestrin, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier, 2016, arXiv.
[18] Johannes Gehrke, et al. Accurate Intelligible Models with Pairwise Interactions, 2013, KDD.
[19] Johannes Gehrke, et al. Intelligible Models for Classification and Regression, 2012, KDD.
[20] Gerhard Tutz, et al. A Comparison of Methods for the Fitting of Generalized Additive Models, 2008, Statistics and Computing.
[21] Johannes Gehrke, et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, 2015, KDD.
[22] Miroslav Dudík, et al. Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?, 2018, CHI.
[23] Motoaki Kawanabe, et al. How to Explain Individual Classification Decisions, 2009, Journal of Machine Learning Research.
[24] J. Friedman. Greedy Function Approximation: A Gradient Boosting Machine, 2001.
[25] Niklas Elmqvist, et al. Graphical Perception of Multiple Time Series, 2010, IEEE Transactions on Visualization and Computer Graphics.
[26] R. Tibshirani, et al. Generalized Additive Models, 1991.
[27] P. Bühlmann, et al. Boosting with the L2-loss: Regression and Classification, 2001.
[28] Cynthia Rudin, et al. Interpretable Classifiers Using Rules and Bayesian Analysis: Building a Better Stroke Prediction Model, 2015, arXiv.
[29] S. Wood. Fast Stable Restricted Maximum Likelihood and Marginal Likelihood Estimation of Semiparametric Generalized Linear Models, 2011.
[30] Rich Caruana, et al. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation, 2017, AIES.
[31] Torsten Hothorn, et al. Model-based Boosting 2.0, 2010, Journal of Machine Learning Research.
[32] R. Tibshirani. Adaptive Piecewise Polynomial Estimation via Trend Filtering, 2013, arXiv:1304.2986.
[33] Tianqi Chen, et al. XGBoost: A Scalable Tree Boosting System, 2016, KDD.
[34] Andrew Slavin Ross, et al. Right for the Right Reasons: Training Differentiable Models by Constraining Their Explanations, 2017, IJCAI.