Identifying patterns in combat that are predictive of success in MOBA games

Multiplayer Online Battle Arena (MOBA) games rely primarily on combat to determine the ultimate outcome of a match. Combat in these games is highly dynamic and can be difficult for novice players to learn; mastery typically requires expert knowledge that players acquire through practice and that is hard to describe concisely. In this paper, we present a data-driven approach for discovering patterns in combat tactics that are common among winning teams in MOBA games. We model combat as a sequence of graphs and extract patterns that predict successful outcomes not just of individual combats, but of the entire game. To identify these patterns, we attribute features to the graphs using well-known graph metrics, which allows us to describe, in meaningful terms, how different combat tactics contribute to team success. We also present an evaluation of our methodology on the popular MOBA game DotA 2 (Defense of the Ancients 2). Experiments show that the extracted patterns achieve 80% prediction accuracy when tested on new game logs.
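
The sketch below illustrates the general idea of the pipeline described above: build one interaction graph per combat, summarize it with standard graph metrics, aggregate over the combats in a game, and feed the result to a classifier. It is a minimal illustration under assumptions, not the paper's implementation: the `combat_graph` and `game_features` helpers, the (source, target) interaction format, the toy data, and the decision-tree learner are all hypothetical, and metrics such as density and betweenness centrality are only plausible examples of the "well-known graph metrics" the abstract mentions without naming.

```python
import networkx as nx
from sklearn.tree import DecisionTreeClassifier

def combat_graph(interactions):
    """Build a directed graph for one combat window.

    `interactions` is a hypothetical list of (source_hero, target_hero)
    pairs (attacks, heals, etc.) extracted from a replay log.
    """
    G = nx.DiGraph()
    G.add_edges_from(interactions)
    return G

def graph_features(G):
    """Summarize a combat graph with a few standard graph metrics."""
    bc = nx.betweenness_centrality(G)           # per-hero betweenness centrality
    degrees = [d for _, d in G.degree()]
    return [
        nx.density(G),                          # fraction of possible interactions that occurred
        max(bc.values(), default=0.0),          # most "central" hero in the fight
        sum(degrees) / max(len(degrees), 1),    # average number of interactions per hero
    ]

def game_features(combats):
    """Average per-combat features over the sequence of combats in one game."""
    per_combat = [graph_features(combat_graph(c)) for c in combats]
    return [sum(col) / len(per_combat) for col in zip(*per_combat)]

# Toy usage with invented data: two games, labelled 1 = win, 0 = loss.
games = [
    [[("a", "x"), ("b", "x"), ("c", "y")], [("a", "y"), ("b", "y")]],
    [[("x", "a")], [("y", "b"), ("x", "b")]],
]
labels = [1, 0]

X = [game_features(g) for g in games]
clf = DecisionTreeClassifier().fit(X, labels)   # stand-in learner, for illustration only
print(clf.predict(X))
```

Any off-the-shelf learner could sit on top of such features; the point of the sketch is that per-combat graph summaries give game-level feature vectors that can be interpreted in terms of how a team fights, not just whether it wins.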
