HTN Fighter: Planning in a highly dynamic game

This paper proposes a plan creation and execution system used by the agent HTN Fighter in the FightingICE game framework. The underlying approach implements a Hierarchical Task Network (HTN) planner and a simple planning domain that focuses on sequences of close-range attacks. The execution process is tightly interleaved with the planning process to compensate for the uncertainty caused by the 15-frame delay with which information about the game world state is provided. Using an HTN and the proposed execution system, the agent is able to follow high-level strategies while staying reactive to changes in the environment. Experiments show that HTN Fighter outperforms the sample MCTS controller and the top three controllers submitted to the 2016 Fighting Game AI Competition.
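To make the core idea concrete, the following is a minimal, illustrative sketch of SHOP-style total-order HTN decomposition, not the paper's actual planner or domain: primitive tasks map to operators that transform the state (or fail), and compound tasks map to methods that expand into subtask lists when their precondition holds. The `punch`/`dash`/`attack_combo` domain is a hypothetical stand-in loosely inspired by the close-range-attack focus described above.

```python
def htn_plan(state, tasks, operators, methods):
    """Return a list of primitive actions, or None if no decomposition works."""
    if not tasks:
        return []
    task, rest = tasks[0], tasks[1:]
    if task in operators:  # primitive task: apply its operator, plan the rest
        new_state = operators[task](dict(state))
        if new_state is None:  # operator precondition failed
            return None
        tail = htn_plan(new_state, rest, operators, methods)
        return None if tail is None else [task] + tail
    # compound task: try each applicable method in order (backtracking)
    for precond, subtasks in methods.get(task, []):
        if precond(state):
            plan = htn_plan(state, list(subtasks) + rest, operators, methods)
            if plan is not None:
                return plan
    return None

# Hypothetical fighting-game fragment: a punch only lands at close range.
def punch(s):
    if s["range"] == "close":
        s["damage_dealt"] += 5
        return s
    return None

def dash(s):
    s["range"] = "close"
    return s

operators = {"punch": punch, "dash": dash}
methods = {
    "attack_combo": [
        # already close: just punch twice
        (lambda s: s["range"] == "close", ["punch", "punch"]),
        # otherwise close the distance first
        (lambda s: True, ["dash", "punch", "punch"]),
    ],
}

plan = htn_plan({"range": "far", "damage_dealt": 0},
                ["attack_combo"], operators, methods)
print(plan)  # ['dash', 'punch', 'punch']
```

In an interleaved setting such as the one the paper describes, the agent would replan from the freshest (delay-compensated) state estimate rather than executing a long plan open-loop.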
