Causal Explanations for Stochastic Sequential Multi-Agent Decision-Making

We present CEMA (Causal Explanations for Multi-Agent decision-making), a system for generating causal explanations of agents' decisions in stochastic sequential multi-agent environments. The core of CEMA is a novel causal selection method that, unlike prior work which assumes a specific causal structure, is applicable whenever a probabilistic model for predicting future states of the environment is available. Using this model, we sample counterfactual worlds to identify and rank the salient causes behind decisions. CEMA is also designed to meet the requirements of social explainable AI: it generates contrastive explanations based on the selected causes, and it operates as an interaction loop with users to ensure that explanations are relevant and intelligible to them. We implement CEMA for motion planning in autonomous driving and test it in four diverse simulated scenarios. We show that CEMA correctly and robustly identifies the relevant causes behind decisions and delivers relevant explanations in response to users' queries.
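To make the counterfactual-sampling idea concrete, the following Python sketch ranks candidate causes of a lane-change decision by how much toggling each one in a counterfactual world shifts a Monte Carlo estimate of the decision probability. This is only an illustration of the general principle, not CEMA's actual model or interface: the feature names in CANDIDATE_CAUSES, the sample_future stand-in for the prediction model, and all probabilities are invented assumptions for this example.

```python
import random

# Hypothetical binary scene features; not the paper's state representation.
CANDIDATE_CAUSES = ["vehicle_ahead_slow", "oncoming_traffic", "lane_closed"]


def sample_future(state, rng):
    """Toy stand-in for a probabilistic prediction model: samples whether
    the ego vehicle changes lane given a (factual or counterfactual) state."""
    p_change_lane = 0.1
    if state["vehicle_ahead_slow"]:
        p_change_lane += 0.6
    if state["oncoming_traffic"]:
        p_change_lane -= 0.05
    if state["lane_closed"]:
        p_change_lane += 0.25
    return rng.random() < max(0.0, min(1.0, p_change_lane))


def decision_probability(state, rng, n=2000):
    """Monte Carlo estimate of P(decision = change lane | state)."""
    return sum(sample_future(state, rng) for _ in range(n)) / n


def rank_causes(factual_state, rng):
    """Score each candidate cause by how much flipping it in a
    counterfactual world changes the decision probability."""
    p_factual = decision_probability(factual_state, rng)
    scores = {}
    for cause in CANDIDATE_CAUSES:
        counterfactual = dict(factual_state)
        counterfactual[cause] = not counterfactual[cause]
        scores[cause] = abs(p_factual - decision_probability(counterfactual, rng))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    rng = random.Random(0)
    factual = {"vehicle_ahead_slow": True, "oncoming_traffic": False, "lane_closed": False}
    for cause, score in rank_causes(factual, rng):
        print(f"{cause}: effect on decision probability ~ {score:.2f}")
```

In the system described by the paper, the role played here by sample_future would be filled by the actual probabilistic prediction model of future environment states, and the ranked causes would then feed into contrastive, natural-language explanations.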
