Balancing Goal Obfuscation and Goal Legibility in Settings with Cooperative and Adversarial Observers

To be useful in the real world, AI agents must plan and act in the presence of others, who may be either cooperative or adversarial. In this paper, we consider the problem of an autonomous agent that must act in a manner that clarifies its objectives to cooperative entities while preventing adversarial entities from inferring those objectives. We show that this problem is solvable when the cooperative and adversarial entities use different types of sensors and/or prior knowledge. We develop two new solution approaches for computing such plans. The first uses an integer programming (IP) solver to compute an optimal solution that maximizes obfuscation for adversarial entities while maximizing legibility for cooperative entities; the second uses heuristic-guided forward search to compute a satisficing solution that achieves preset levels of obfuscation and legibility for adversarial and cooperative entities, respectively. We demonstrate the feasibility and utility of our algorithms through extensive empirical evaluation on problems derived from planning benchmarks.
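To make the satisficing approach concrete, the sketch below illustrates one way a heuristic-guided forward search could enforce preset obfuscation and legibility levels when the two observers rely on different sensor models. It is a toy illustration under strong assumptions: a grid domain, an adversary that observes only the axis of each move, a cooperative observer that sees the exact action, and a simplistic goal-consistency test. None of these details (function names, sensor models, thresholds) come from the paper itself.

```python
# Hypothetical sketch of the satisficing idea: heuristic-guided forward search
# for a plan whose observation history keeps at least k_adv candidate goals
# plausible under the adversary's sensor model (obfuscation) and at most
# k_coop under the cooperative observer's model (legibility). The toy grid
# domain and the "always move closer to the goal" consistency test are
# illustrative assumptions, not the paper's formulation.

import heapq
from itertools import count

ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def apply_action(state, action):
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Assumed sensor models: the adversary only observes the axis of motion,
# while the cooperative observer sees the exact action.
def adv_obs(action):
    return "vertical" if action in ("up", "down") else "horizontal"

def coop_obs(action):
    return action

def compatible_states(obs_seq, obs_fn, start, goal):
    """States reachable from `start` by action sequences that emit `obs_seq`
    and move strictly closer to `goal` at every step (the observer's simple
    rationality assumption in this sketch)."""
    frontier = {start}
    for o in obs_seq:
        nxt = set()
        for s in frontier:
            for a in ACTIONS:
                t = apply_action(s, a)
                if obs_fn(a) == o and manhattan(t, goal) < manhattan(s, goal):
                    nxt.add(t)
        frontier = nxt
        if not frontier:
            break
    return frontier

def consistent_goals(obs_seq, obs_fn, start, goals):
    """Candidate goals still consistent with the observed sequence."""
    return {g for g in goals if compatible_states(obs_seq, obs_fn, start, g)}

def search(start, true_goal, goals, k_adv=2, k_coop=1, max_len=12):
    """Best-first forward search for a plan to `true_goal` that ends with
    >= k_adv goals plausible to the adversary and <= k_coop goals plausible
    to the cooperative observer."""
    tie = count()
    frontier = [(manhattan(start, true_goal), next(tie), start, [])]
    while frontier:
        _, _, state, plan = heapq.heappop(frontier)
        if state == true_goal and plan:
            adv = consistent_goals([adv_obs(a) for a in plan], adv_obs, start, goals)
            coop = consistent_goals([coop_obs(a) for a in plan], coop_obs, start, goals)
            if len(adv) >= k_adv and true_goal in coop and len(coop) <= k_coop:
                return plan
            continue  # this plan violates a threshold; keep searching
        if len(plan) >= max_len:
            continue
        for a in ACTIONS:
            nxt = apply_action(state, a)
            f = len(plan) + 1 + manhattan(nxt, true_goal)
            heapq.heappush(frontier, (f, next(tie), nxt, plan + [a]))
    return None

if __name__ == "__main__":
    # Three candidate goals; only (3, 3) is the agent's true goal. The
    # adversary, seeing only movement axes, cannot rule out the mirrored
    # goals, while the cooperative observer can identify the true goal.
    candidate_goals = [(3, 3), (-3, 3), (3, -3)]
    print(search((0, 0), (3, 3), candidate_goals, k_adv=2, k_coop=1))
```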
