Monotonic Maximin: A Robust Stackelberg Solution against Boundedly Rational Followers

There has been recent interest in applying Stackelberg games to infrastructure security, in which a defender must protect targets from attack by an adaptive adversary. In real-world security settings the adversaries are humans and are thus boundedly rational. Most existing approaches for computing defender strategies against boundedly rational adversaries try to optimize against specific behavioral models of adversaries, and provide no quality guarantee when the estimated model is inaccurate. We propose a new solution concept, monotonic maximin, which provides guarantees against all adversary behavior models satisfying monotonicity, including all in the family of Regular Quantal Response functions. We propose a mixed-integer linear program formulation for computing monotonic maximin. We also consider top-monotonic maximin, a related solution concept that is more conservative, and propose a polynomial-time algorithm for top-monotonic maximin.
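To make the monotonicity condition concrete, here is a small illustrative sketch (not the paper's MILP formulation): a logit quantal response, the canonical member of the Regular Quantal Response family. The toy security-game payoffs (`reward`, `penalty`, `coverage`) are hypothetical; the point is that a monotonic adversary attacks higher-utility targets with higher probability, which is the property monotonic maximin exploits.

```python
import math

# Hypothetical toy security game: the attacker's expected utility at
# target t under defender coverage x[t] is
#   u_t = reward[t] * (1 - x[t]) - penalty[t] * x[t].
def attacker_utilities(coverage, reward, penalty):
    return [r * (1 - c) - p * c
            for c, r, p in zip(coverage, reward, penalty)]

def logit_response(utils, lam=1.0):
    # Logit quantal response: q_t proportional to exp(lam * u_t).
    # It is monotonic: a target with higher expected utility is
    # attacked with higher probability (the condition the paper's
    # robustness guarantee requires of the adversary model).
    weights = [math.exp(lam * u) for u in utils]
    z = sum(weights)
    return [w / z for w in weights]

coverage = [0.6, 0.3, 0.1]   # defender mixed strategy (toy numbers)
reward   = [5.0, 3.0, 1.0]   # attacker reward if target is uncovered
penalty  = [2.0, 1.0, 0.5]   # attacker penalty if target is covered

u = attacker_utilities(coverage, reward, penalty)
q = logit_response(u, lam=2.0)
```

With these toy numbers the attacker utilities come out ordered `u[1] > u[2] > u[0]`, and the logit response preserves that order in the attack probabilities, illustrating monotonicity; as `lam` grows the response approaches a perfectly rational best response, and at `lam = 0` it is uniform.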
