InterpretML: A Unified Framework for Machine Learning Interpretability

InterpretML is an open-source Python package that exposes machine learning interpretability algorithms to practitioners and researchers. It covers two types of interpretability: glassbox models, which are machine learning models designed to be interpretable (e.g., linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (e.g., partial dependence, LIME). The package lets practitioners easily compare interpretability algorithms by exposing multiple methods under a unified API and by providing a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine (EBM), a powerful, interpretable glassbox model that can be as accurate as many blackbox models. The MIT-licensed source code is available at github.com/microsoft/interpret.
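
To make the unified API concrete, the following is a minimal sketch of training a glassbox model and rendering its explanations. It assumes the package's scikit-learn-style interface and the ExplainableBoostingClassifier, explain_global, explain_local, and show entry points; the dataset and split are illustrative choices, not part of the original text.

    # Minimal sketch of the InterpretML workflow (assumed scikit-learn-style API).
    import pandas as pd
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    from interpret import show
    from interpret.glassbox import ExplainableBoostingClassifier

    # Load a small tabular dataset (illustrative choice).
    data = load_breast_cancer()
    X = pd.DataFrame(data.data, columns=data.feature_names)
    y = data.target
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Glassbox: the Explainable Boosting Machine trains like any
    # scikit-learn estimator.
    ebm = ExplainableBoostingClassifier()
    ebm.fit(X_train, y_train)

    # Each explainer returns an explanation object; show() renders it
    # in the built-in visualization dashboard.
    show(ebm.explain_global())                        # per-feature shape functions
    show(ebm.explain_local(X_test[:5], y_test[:5]))   # individual predictions

Blackbox explainers under interpret.blackbox follow the same pattern: each produces an explanation object that the same show call can visualize, which is what makes side-by-side comparison of methods straightforward.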
