D3WA+ - A Case Study of XAIP in a Model Acquisition Task for Dialogue Planning

Recently, the D3WA system was proposed as a paradigm shift in how complex goal-oriented dialogue agents can be specified, by taking a declarative view of design. However, it turns out that actual users of the system have a hard time evolving their mental model and grasping the imperative consequences of declarative design. In this paper, we adopt ideas from existing work in the field of Explainable AI Planning (XAIP) to provide guidance to the dialogue designer during the model acquisition process. In the course of this discussion, we highlight how this setting presents unique challenges for XAIP, including having to deal with the user persona of a domain modeler rather than the end-user of the system, and consequently having to explain the unsolvability of models in addition to explaining generated plans.

Demo: http://ibm.biz/d3wa-xaip
