Rich Caruana | Xuezhou Zhang | Paul Koch | Sarah Tan | Urszula Chajewska | Yin Lou
[1] Tommi S. Jaakkola,et al. Towards Robust Interpretability with Self-Explaining Neural Networks , 2018, NeurIPS.
[2] Albert Gordo,et al. Transparent Model Distillation , 2018, ArXiv.
[3] Margo I. Seltzer,et al. Scalable Bayesian Rule Lists , 2016, ICML.
[4] Paul H. C. Eilers,et al. Flexible smoothing with B-splines and penalties , 1996 .
[5] Carlos Guestrin,et al. Anchors: High-Precision Model-Agnostic Explanations , 2018, AAAI.
[6] Cynthia Rudin,et al. Interpretable classification models for recidivism prediction , 2015, ArXiv:1503.07810.
[7] Johannes Gehrke,et al. Accurate intelligible models with pairwise interactions , 2013, KDD.
[8] Johannes Gehrke,et al. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission , 2015, KDD.
[9] Motoaki Kawanabe,et al. How to Explain Individual Classification Decisions , 2009, J. Mach. Learn. Res..
[10] J. Friedman. Greedy function approximation: A gradient boosting machine. , 2001 .
[11] Jure Leskovec,et al. Interpretable Decision Sets: A Joint Framework for Description and Prediction , 2016, KDD.
[12] Cynthia Rudin,et al. Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model , 2015, ArXiv.
[13] Niklas Elmqvist,et al. Graphical Perception of Multiple Time Series , 2010, IEEE Transactions on Visualization and Computer Graphics.
[14] R. Tibshirani,et al. Generalized Additive Models , 1991 .
[15] Rich Caruana,et al. Auditing Black-Box Models Using Transparent Model Distillation With Side Information , 2017 .
[16] Daniel Servén,et al. pyGAM: Generalized Additive Models in Python , 2018 .
[17] Rich Caruana,et al. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation , 2017, AIES.
[18] Joachim M. Buhmann,et al. The Balanced Accuracy and Its Posterior Distribution , 2010, 2010 20th International Conference on Pattern Recognition.
[19] Carlos Guestrin,et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier , 2016, ArXiv.
[20] Johannes Gehrke,et al. Intelligible models for classification and regression , 2012, KDD.
[21] Gerhard Tutz,et al. A comparison of methods for the fitting of generalized additive models , 2008, Stat. Comput..
[22] Torsten Hothorn,et al. Model-Based Boosting , 2015 .
[23] Been Kim,et al. Towards A Rigorous Science of Interpretable Machine Learning , 2017, ArXiv:1702.08608.
[24] P. Bühlmann,et al. Boosting with the L2-loss: regression and classification , 2001 .
[25] Miroslav Dudík,et al. Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need? , 2018, CHI.
[26] Zachary Chase Lipton. The mythos of model interpretability , 2016, ACM Queue.
[27] Y. Freund,et al. Discussion of the paper "Additive Logistic Regression: A Statistical View of Boosting" , 2000.
[28] S. Wood. Generalized Additive Models: An Introduction with R , 2006 .
[29] S. Wood. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models , 2011 .
[30] R. Tibshirani. Adaptive piecewise polynomial estimation via trend filtering , 2013, ArXiv:1304.2986.
[31] Tianqi Chen,et al. XGBoost: A Scalable Tree Boosting System , 2016, KDD.