AI for Explaining Decisions in Multi-Agent Environments

Explanation is necessary for humans to understand and accept decisions made by an AI system when the system's goal is known. It is even more important when the AI system makes decisions in multi-agent environments, where the human does not know the system's goals since they may depend on other agents' preferences. In such situations, explanations should aim to increase user satisfaction, taking into account the system's decision, the user's and the other agents' preferences, the environment settings, and properties such as fairness, envy and privacy. Generating explanations that increase user satisfaction is very challenging; to address this challenge, we propose a new research direction: xMASE. We then review the state of the art and discuss research directions towards efficient methodologies and algorithms for generating explanations that will increase users' satisfaction with AI systems' decisions in multi-agent environments.
