Unsupervised Hierarchical Clustering of Build Orders in a Real-Time Strategy Game

Currently, no artificial intelligence (AI) agent can beat a professional real-time strategy game player. A lack of effective opponent modeling limits an AI agent's ability to adapt to new opponents or strategies. Opponent models provide an understanding of the opponent's strategy and potential future actions. To date, opponent models have relied on handcrafted features and expert-defined strategies, which restricts such models to previously known and easily understood strategies. In this paper, we propose size-first hierarchical clustering to group players that employ similar strategies in a real-time strategy (RTS) game. We employ an unsupervised hierarchical clustering algorithm to cluster game build orders into strategy groups. To eliminate small outlying clusters, the hierarchical clustering algorithm was modified to first merge the smallest cluster with its closest neighbor, i.e., size-first hierarchical clustering. In our analysis, we employ a previously developed dataset based on StarCraft: Brood War game replays. In our proposed approach, principal component analysis (PCA) is used to visualize player clusters, and the obtained PCA graphs show that the clusters are qualitatively distinct. We also demonstrate that a game's outcome is marginally affected by both players' clusters. In addition, we show that the opponent's faction can be determined based on a player's transitions between clusters over time. The novelty of our analysis is the absence of expert-defined features and an automated stopping condition to determine the appropriate number of clusters. Thus, the proposed approach is bias-free and applicable to any StarCraft-like RTS game.
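The size-first modification described above can be illustrated with a minimal sketch. This is not the paper's implementation: the cluster-distance metric (centroid Euclidean distance), the singleton initialization, and the minimum-size stopping condition are all illustrative assumptions; the build-order feature vectors and the paper's actual linkage criterion are not specified here.

```python
import math

def centroid(points, idxs):
    # Mean vector of the points assigned to one cluster.
    dim = len(points[0])
    return [sum(points[i][j] for i in idxs) / len(idxs) for j in range(dim)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def size_first_cluster(points, min_size):
    # Standard agglomerative initialization: one singleton cluster per point.
    clusters = [[i] for i in range(len(points))]
    # Size-first rule (illustrative): always merge the *smallest* cluster
    # into its nearest neighbor, so small outlying clusters are absorbed
    # before any other merges happen.
    while len(clusters) > 1:
        smallest = min(clusters, key=len)
        if len(smallest) >= min_size:
            break  # assumed stopping condition: no undersized clusters remain
        rest = [c for c in clusters if c is not smallest]
        c0 = centroid(points, smallest)
        nearest = min(rest, key=lambda c: dist(c0, centroid(points, c)))
        nearest.extend(smallest)
        clusters = rest
    return clusters
```

On two well-separated blobs, e.g. three points near the origin and three near (5, 5), `size_first_cluster(points, 3)` absorbs the singletons into their local neighbors and stops once every cluster reaches the minimum size, yielding two clusters of three points each.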
