Puppet Search: Enhancing Scripted Behavior by Look-Ahead Search with Applications to Real-Time Strategy Games

Real-Time Strategy (RTS) games have proven to be very resilient to standard adversarial tree search techniques. Recently, a few approaches to tackling their complexity have emerged that use game state abstractions, move abstractions, or both. Unfortunately, the supporting experiments were either limited to simpler RTS environments (μRTS, SparCraft) or lacked testing against state-of-the-art game playing agents. Here, we propose Puppet Search, a new adversarial search framework based on scripts that can expose choice points to a look-ahead search procedure. Selecting a combination of a script and decisions for its choice points represents a move to be applied next. Such moves can be executed in the actual game, thus letting the script play, or in an abstract representation of the game state, which can be used by an adversarial tree search algorithm. Puppet Search returns a principal variation of scripts and choices to be executed by the agent for a given time span.
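To make the idea concrete, the following minimal Python sketch shows how a script with exposed choice points can be combined with look-ahead search. Everything here is a hypothetical illustration under simplifying assumptions, not the paper's implementation: a toy forward model stands in for the real game, and plain minimax over choice-point assignments stands in for the adversarial search used in the framework.

    # Illustrative sketch of the Puppet Search idea: a script exposes choice
    # points, and a look-ahead search picks the best combination of choices.
    # All names and the toy "economy vs. army" forward model are assumptions
    # made for this example only.

    from dataclasses import dataclass
    from itertools import product

    @dataclass(frozen=True)
    class State:
        """Tiny abstract game state: army sizes and economies for both players."""
        army: tuple = (0, 0)
        economy: tuple = (10, 10)
        time: int = 0

    # A "puppet move" is one assignment to the script's choice points.
    CHOICE_POINTS = {
        "opening": ("rush", "expand"),
        "army_mix": ("melee", "ranged"),
    }

    def apply_script(state, player, choices):
        """Forward model: let the script play for one abstract time step."""
        army, econ = list(state.army), list(state.economy)
        if choices["opening"] == "rush":
            army[player] += econ[player] // 2   # spend income on units
        else:
            econ[player] += 3                   # expand: grow the economy
        if choices["army_mix"] == "ranged":
            army[player] += 1                   # small tech bonus
        return State(tuple(army), tuple(econ), state.time + 1)

    def evaluate(state):
        """Heuristic evaluation from player 0's point of view."""
        return (state.army[0] - state.army[1]) + 0.2 * (state.economy[0] - state.economy[1])

    def puppet_search(state, depth, player=0):
        """Minimax over script-choice combinations; returns (value, principal variation)."""
        if depth == 0:
            return evaluate(state), []
        best_value, best_pv = None, []
        for combo in product(*CHOICE_POINTS.values()):
            choices = dict(zip(CHOICE_POINTS, combo))
            child = apply_script(state, player, choices)
            value, pv = puppet_search(child, depth - 1, 1 - player)
            if best_value is None or (value > best_value if player == 0 else value < best_value):
                best_value, best_pv = value, [choices] + pv
        return best_value, best_pv

    if __name__ == "__main__":
        value, pv = puppet_search(State(), depth=4)
        print("value:", value)
        for step, choices in enumerate(pv):
            print(f"step {step}: {choices}")

The returned principal variation plays the role described in the abstract: the agent executes its first entry for a while (here, one abstract time step per entry), and the search can be re-invoked later to adjust to how the game actually unfolds.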
