Evaluating the Interpretability of the Knowledge Compilation Map: Communicating Logical Statements Effectively

Knowledge compilation techniques translate propositional theories into equivalent forms that make reasoning over them computationally tractable. But how should these propositional theories best be presented to a human? We analyze the standard taxonomy of propositional theories, the knowledge compilation map, for relative interpretability across three model domains: highway driving, emergency triage, and the chopsticks game. We generate decision-making agents that produce logical explanations for their actions and apply knowledge compilation to these explanations. We then evaluate how quickly, accurately, and confidently users comprehend the generated explanations. We find that domain, formula size, and negated logical connectives significantly affect comprehension, while formula properties typically associated with interpretability are not strong predictors of human ability to comprehend a theory.
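
As a minimal sketch of what such a translation looks like (the formula below is our own illustrative example, not one drawn from the study), consider a theory in conjunctive normal form (CNF) and a logically equivalent compilation into deterministic, decomposable negation normal form (d-DNNF):

    CNF:     (a ∨ b) ∧ (¬a ∨ c)
    d-DNNF:  (a ∧ c) ∨ (¬a ∧ b)

Both formulas have exactly the same models. In the d-DNNF form, the disjuncts are mutually exclusive (deterministic) and each conjunction mentions disjoint variables (decomposable), so queries such as model counting become tractable; yet the two presentations may differ considerably in how easily a human reads them, which is precisely the question the study addresses.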
