AI Explainability 360: Hands-on Tutorial

This tutorial will teach participants to use and contribute to a new open-source Python package named AI Explainability 360 (AIX360) (https://aix360.mybluemix.net), a comprehensive and extensible toolkit that supports interpretability and explainability of data and machine learning models.

Motivation for the toolkit. The AIX360 toolkit reflects the fact that there is no single approach to explainability that works best for all situations. There are many ways to explain: data vs. model, direct vs. post-hoc explanation, local vs. global, etc. The toolkit includes ten state-of-the-art algorithms that cover different dimensions of explanation, along with proxy explainability metrics. Moreover, one of our prime objectives is for AIX360 to serve as an educational tool, even for non-machine-learning experts (e.g., social scientists and healthcare experts). To this end, the toolkit offers an interactive demonstration, highly descriptive Jupyter notebooks covering diverse real-world use cases, and guidance materials, all of which help users navigate the complex explainability space. Compared to existing open-source efforts on AI explainability, AIX360 takes a step forward by focusing on a greater diversity of ways of explaining, usability in industry, and software engineering. By integrating these three aspects, we hope that AIX360 will attract researchers in AI explainability and help translate our collective research results for practicing data scientists and developers deploying solutions in a variety of industries. Regarding the first aspect, diversity, Table 1 in [1] compares AIX360 to existing toolkits in terms of the types of explainability methods offered. The table shows that AIX360 not only covers more types of methods but also provides metrics that can act as proxies for judging the quality of explanations. Regarding the second aspect, industry usage, AIX360 illustrates how these explainability algorithms can be applied in specific contexts (see Audience, goals, and outcomes below). In just a few months since its initial release, the AIX360 toolkit already has a vibrant Slack community with over 120 members and has been forked almost 80 times, accumulating over 400 stars. This response leads us to believe that there is significant interest in the community in learning more about the toolkit and about explainability in general.

Audience, goals, and outcomes. The presentations in the tutorial are aimed at an audience with diverse backgrounds and levels of computer science expertise. For all audience members, and especially those unfamiliar with Python programming, AIX360 provides an interactive experience (http://aix360.mybluemix.net/data) centered around a credit approval scenario as a gentle and grounded introduction to the concepts and capabilities of the toolkit. We will also teach all participants which type of explainability algorithm is most appropriate for a given use case, covering not only the algorithms in the toolkit but also those from the broader explainability literature. Knowing which explainability algorithms apply to which contexts, and understanding when to use them, can benefit most people regardless of their technical background. The second part of the tutorial will consist of three use cases featuring different industry domains and explanation methods. Data scientists and developers can gain hands-on experience with the toolkit by running and modifying Jupyter notebooks, while others will be able to follow along by viewing rendered versions of the notebooks.
Here is a rough agenda of the tutorial:

1) Overture: Provide a brief introduction to the area of explainability and introduce common terms.
2) Interactive Web Experience: The AIX360 interactive web experience (http://aix360.mybluemix.net/data) is intended to show a non-computer-science audience how different explainability methods may suit different stakeholders in a credit approval scenario (data scientists, loan officers, and bank customers).
3) Taxonomy: We will next present a taxonomy that we have created for organizing the space of explanations and guiding practitioners toward an appropriate choice for their applications.
4) Installation: We will transition into a Python environment and ask participants to install the AIX360 package on their machines using provided instructions (see the installation sketch after this agenda).
5) Example Use Cases in Finance, Government, and Healthcare: We will take participants through three use cases in various application domains in the form of Jupyter notebooks (a small sketch of the kind of exercise involved appears below).
6) Metrics: We will briefly showcase the two explainability metrics currently available through the toolkit (see the metrics sketch below).
7) Future Directions: The final segment will discuss future directions and how participants can contribute to the toolkit.
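For the installation step (agenda item 4), the following is a minimal sketch of the kind of setup check we will walk participants through. It assumes the package has already been installed into the active environment, for example with `pip install aix360` or a development install from the GitHub repository; the module path shown for the ProtoDash explainer reflects the toolkit's layout at the time of writing and may change in later versions.

```python
# Minimal setup check (a sketch): assumes AIX360 has already been installed,
# e.g. with `pip install aix360` or `pip install -e .` from a cloned repository.
import aix360
from aix360.algorithms.protodash import ProtodashExplainer  # one of the bundled explainers

# Confirm the package is importable and report where it was loaded from.
print("AIX360 loaded from:", aix360.__file__)
```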
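To give a flavor of the notebook exercises in agenda item 5, the sketch below runs the ProtoDash explainer on a small synthetic table rather than the finance, government, or healthcare data used in the actual notebooks. The call pattern and the return order (weights, prototype indices, objective values) are based on the toolkit's documentation at the time of writing and should be checked against the installed version.

```python
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

# Synthetic stand-in for a tabular dataset; the tutorial notebooks use real
# domain data (e.g., credit scoring records) in its place.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

# ProtoDash selects a small set of "prototypical" rows that summarize a dataset,
# together with an importance weight for each prototype.
explainer = ProtodashExplainer()

# Assumed call pattern: choose m = 5 prototypes of X drawn from X itself;
# the explainer returns prototype weights, the indices of the selected rows,
# and intermediate objective values (ignored here).
weights, proto_indices, _ = explainer.explain(X, X, m=5)

print("Prototype row indices:", proto_indices)
print("Prototype weights:", np.round(weights, 3))
```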
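Finally, for agenda item 6, here is a hedged sketch of how the two proxy metrics might be invoked. It assumes the `faithfulness_metric(model, x, coefs, base)` and `monotonicity_metric(model, x, coefs, base)` signatures exposed by `aix360.metrics` at the time of writing, a classifier with a scikit-learn-style `predict_proba`, and a per-feature attribution vector that would normally come from an explanation method (here the model's own coefficients are used as a crude stand-in).

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from aix360.metrics import faithfulness_metric, monotonicity_metric  # assumed import path

# Train a simple classifier whose explanations we want to score.
data = load_breast_cancer()
X, y = data.data, data.target
model = LogisticRegression(max_iter=5000).fit(X, y)

# Pick one instance and a feature-attribution vector for it. In the tutorial
# notebooks the attributions come from an explanation method (e.g., LIME);
# here the global coefficients serve as a stand-in.
x = X[0]
attributions = model.coef_[0]
baseline = np.zeros_like(x)  # values each feature is replaced with when "removed"

# Faithfulness correlates attribution magnitude with the drop in predicted
# probability when a feature is replaced by its baseline value; monotonicity
# checks whether adding features in order of importance steadily improves
# the prediction. Both operate on a single instance.
print("faithfulness:", faithfulness_metric(model, x, attributions, baseline))
print("monotonicity:", monotonicity_metric(model, x, attributions, baseline))
```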