Different "Intelligibility" for Different Folks

Many arguments have concluded that our autonomous technologies must be intelligible, interpretable, or explainable, even if that property comes at a performance cost. In this paper, we consider the reasons why some such property might be valuable, and we conclude that there is not simply one kind of 'intelligibility', but rather different types for different individuals and uses. In particular, different interests and goals require different types of intelligibility (or explanation, or other related notions). We thus provide a typology of 'intelligibility' that distinguishes various notions, and we draw methodological conclusions about how autonomous technologies should be designed and deployed in different ways, depending on whose intelligibility is required.
