Explainable Reinforcement Learning Through a Causal Lens

Prevalent theories in cognitive science propose that humans understand and represent knowledge of the world through causal relationships. In making sense of the world, we build causal models in our minds to encode the cause-effect relations of events, and we use these to explain why new events happen. In this paper, we use causal models to derive causal explanations of the behaviour of reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables of interest. This model is then used to generate explanations of behaviour based on counterfactual analysis of the causal model. We report on a study with 120 participants who observe agents playing a real-time strategy game (StarCraft II) and then receive explanations of the agents' behaviour. We investigated: 1) participants' understanding gained from explanations, measured through task prediction; 2) explanation satisfaction; and 3) trust. Our results show that causal model explanations perform better on these measures than two baseline explanation models.
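The counterfactual analysis described above can be sketched, in highly simplified form, as the standard abduction-action-prediction procedure over a structural causal model. This is not the authors' implementation; the variables (`workers`, `supply`, `army`) and the linear structural equations are illustrative placeholders chosen to show the mechanics of answering a "what would have happened if the agent had acted differently?" query.

```python
def simulate(workers, u_supply, u_army):
    """Structural equations of a toy SCM: each variable is a function of
    its causal parents plus an exogenous noise term (u_*). The equations
    are made-up linear relations for illustration only."""
    supply = 8 + 1 * workers + u_supply   # supply depends on worker count
    army = 2 * supply + u_army            # army size depends on supply
    return supply, army


def counterfactual_army(obs_workers, obs_supply, obs_army, cf_workers):
    """Abduction-action-prediction:
    1. Abduction: infer the exogenous noise terms consistent with the
       observed episode.
    2. Action: intervene, setting workers to the counterfactual value.
    3. Prediction: replay the structural equations under the intervention.
    """
    # 1. Abduction: recover noise from observations.
    u_supply = obs_supply - (8 + 1 * obs_workers)
    u_army = obs_army - 2 * obs_supply
    # 2-3. Action and prediction: re-evaluate under the intervention.
    _cf_supply, cf_army = simulate(cf_workers, u_supply, u_army)
    return cf_army


# "Had the agent built 14 workers instead of 10, its army would have been..."
print(counterfactual_army(10, 18, 36, cf_workers=14))  # → 44
```

An explanation is then phrased contrastively from the difference between the observed outcome (36) and the counterfactual one (44); in the paper's setting the structural equations are learned during training rather than hand-coded as above.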
