Optimal Patrol Planning for Green Security Games with Black-Box Attackers

Motivated by the problem of protecting endangered animals, there has been a surge of interest in optimizing patrol planning for conservation area protection. Previous efforts in these domains have mostly focused on optimizing patrol routes against a specific boundedly rational poacher behavior model that describes poachers' choices of areas to attack. However, these planning algorithms do not apply to other poaching prediction models, particularly complex machine learning models, which have recently been shown to provide better predictions than traditional bounded-rationality-based models. Moreover, previous patrol planning algorithms do not address the important concern that poachers may infer patrol routes by partially monitoring the rangers' movements. In this paper, we propose OPERA, a general patrol planning framework that: (1) generates optimal implementable patrol routes against a black-box attacker, which can represent a wide range of poaching prediction models; and (2) incorporates entropy maximization to ensure that the generated routes are more unpredictable and robust to poachers' partial monitoring. Our experiments on a real-world dataset from Uganda's Queen Elizabeth Protected Area (QEPA) show that OPERA achieves better defender utility, more efficient coverage of the area, and more unpredictability than benchmark algorithms and the past routes used by rangers at QEPA.
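The entropy-maximization idea in the abstract can be illustrated with a minimal sketch: given per-route expected utilities scored by some black-box poaching-prediction model, mix over routes rather than always playing the single best one, so that a poacher partially observing patrols cannot predict the next route. The route names, utilities, and the softmax-with-temperature mixing rule below are illustrative assumptions, not OPERA's actual algorithm.

```python
import math
import random

# Hypothetical expected defender utilities per candidate route,
# as scored by a black-box attacker/prediction model (values assumed).
route_utilities = {"A": 0.90, "B": 0.85, "C": 0.40}

def entropy_regularized_mix(utilities, temperature=0.1):
    """Softmax over route utilities. A higher temperature yields a
    higher-entropy (less predictable) patrol distribution, trading
    some expected utility for robustness to partial monitoring."""
    exps = {r: math.exp(u / temperature) for r, u in utilities.items()}
    total = sum(exps.values())
    return {r: v / total for r, v in exps.items()}

dist = entropy_regularized_mix(route_utilities, temperature=0.1)

# Each day, sample a route from the mixed strategy instead of
# deterministically patrolling the highest-utility route.
todays_route = random.choices(list(dist), weights=list(dist.values()))[0]
```

Raising `temperature` spreads probability mass more evenly across routes (higher entropy), while lowering it concentrates mass on the top-utility route (more predictable but higher expected utility), which is the trade-off the abstract's point (2) refers to.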
