A Real-time Strategy Agent Framework and Strategy Classifier for Computer Generated Forces

Abstract: This research effort concerns the advancement of computer-generated forces (CGF) AI for Department of Defense (DoD) military training and education. The vision of this work is agents capable of perceiving and intelligently responding to opponent strategies in real time. Our research goal is to lay the foundations for such an agent. Six research objectives are defined: 1) formulate a strategy-definition schema effective in defining a range of real-time strategy (RTS) strategies; 2) create eight strategy definitions via the schema; 3) design a real-time agent framework that plays the game according to a given strategy definition; 4) generate an RTS data set; 5) create an accurate and fast-executing strategy classifier; and 6) find the best counter-strategy for each strategy definition. The agent framework plays the eight strategies against each other to generate a data set of game observations. To classify the data, we first perform feature reduction using principal component analysis (PCA) or linear discriminant analysis (LDA). Two classifier techniques are employed: k-means clustering with k-nearest neighbor, and a support vector machine (SVM). The resulting classifier is 94.1% accurate with an average classification execution time of 7.14 µs. Our research effort has successfully laid the foundations for a dynamic strategy agent.
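The classification pipeline described in the abstract (feature reduction followed by a classifier) can be sketched in scikit-learn. This is an illustrative sketch only, not the paper's actual code: the synthetic Gaussian data below stands in for the RTS game-observation data set, and the feature count and PCA dimensionality are hypothetical choices.

```python
# Sketch of the abstract's pipeline: PCA feature reduction feeding an SVM
# classifier over observations labeled by one of eight strategies.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_strategies = 8   # eight strategy definitions, per the abstract
n_features = 40    # hypothetical size of a game-observation feature vector

# Synthetic stand-in data: one Gaussian cluster of observations per strategy.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(100, n_features))
               for i in range(n_strategies)])
y = np.repeat(np.arange(n_strategies), 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# PCA reduces dimensionality before classification, matching the abstract's
# first feature-reduction option; an RBF-kernel SVM then labels the strategy.
clf = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

On well-separated synthetic clusters like these the pipeline classifies nearly perfectly; the paper's reported 94.1% accuracy reflects the harder, real game-observation data.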
