Model Elicitation through Direct Questioning

The future will be replete with scenarios where humans and robots work together in complex environments. In such teams, the robot's interactions should aim to gather useful information about its human teammate's model. Several challenges must be addressed before a robot can interact effectively, such as accommodating structural differences in the human's model and keeping the required responses simple. In this paper, we investigate how a robot can interact with the human to localize the human's model within a set of candidate models. We show how to generate questions that refine the robot's understanding of the teammate's model, and we evaluate the method in various planning domains. The evaluation shows that these questions can be generated offline and can help refine the model through simple answers.
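The core idea of localizing a model through questions can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's actual algorithm: it maintains a set of candidate models and greedily asks the question whose answers most evenly split the remaining candidates, filtering by the teammate's answer. All names (`localize`, `ask`, and the model/question representations) are hypothetical.

```python
def localize(candidates, questions, ask):
    """Hypothetical sketch: narrow a set of candidate models by asking
    questions until at most one model remains. Each question is a function
    mapping a model to the answer that model would produce; `ask` obtains
    the human teammate's actual answer to a question."""
    remaining = set(candidates)
    while len(remaining) > 1:
        # Score a question by the size of its largest answer-group over the
        # remaining candidates; smaller is better (greedy halving strategy).
        def split_score(q):
            groups = {}
            for m in remaining:
                groups.setdefault(q(m), set()).add(m)
            return max(len(g) for g in groups.values())

        best = min(questions, key=split_score)
        if split_score(best) == len(remaining):
            break  # no question distinguishes the remaining models
        answer = ask(best)  # the human teammate answers
        remaining = {m for m in remaining if best(m) == answer}
    return remaining
```

Note that the questions themselves depend only on the candidate models, so they can be precomputed offline; only the filtering step needs the teammate's answers at interaction time.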
