Automatic Critical Mechanic Discovery in Video Games

We present a new method for automatic critical mechanic discovery in video games that combines game description parsing with playtrace information. The method is applied to several games within the General Video Game Artificial Intelligence (GVG-AI) framework. In a user study, human-identified mechanics are compared against system-identified critical mechanics to verify alignment between humans and the system. The results of the study demonstrate that the new method matches human-identified mechanics more consistently than the baseline. The system is further validated by comparing MCTS agents augmented with critical mechanics against vanilla MCTS agents on four GVG-AI games. The new playtrace-based method yields a significant performance improvement over the baseline on all four tested games, and it matches or improves upon the previous method, demonstrating that playtrace information enables more complete critical mechanic discovery.
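
The comparison against vanilla MCTS hinges on using the discovered critical mechanics to shape the agent's rollout evaluation. The abstract does not specify the exact augmentation, so the following is only a minimal Python sketch of one plausible scheme, assuming a fixed bonus is added to a rollout's backed-up value for each distinct critical mechanic it triggers; all identifiers (Mechanic, CRITICAL_MECHANICS, score_rollout, BONUS_PER_MECHANIC) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): score an MCTS rollout by
# combining the game's terminal reward with a bonus for each critical mechanic
# the rollout triggered. Names and weights here are illustrative assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class Mechanic:
    """A game event, e.g. 'avatar collects key' in a GVG-AI game."""
    actor: str
    interaction: str
    target: str


# Hypothetical critical-mechanic set for a Zelda-like GVG-AI game.
CRITICAL_MECHANICS = {
    Mechanic("avatar", "collects", "key"),
    Mechanic("key", "opens", "door"),
}

BONUS_PER_MECHANIC = 0.5  # assumed weight; would need tuning per game


def score_rollout(terminal_reward: float, triggered: set[Mechanic]) -> float:
    """Value backed up the MCTS tree: base reward plus a bonus for every
    distinct critical mechanic the rollout managed to trigger."""
    bonus = BONUS_PER_MECHANIC * len(triggered & CRITICAL_MECHANICS)
    return terminal_reward + bonus


if __name__ == "__main__":
    # A losing rollout that still collected the key outscores one that did nothing.
    print(score_rollout(0.0, {Mechanic("avatar", "collects", "key")}))  # 0.5
    print(score_rollout(0.0, set()))                                    # 0.0
```

Counting each critical mechanic at most once per rollout keeps the bonus from overwhelming the game's own score signal while still guiding the agent toward the discovered subgoals.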
