"It's Unwieldy and It Takes a Lot of Time." Challenges and Opportunities for Creating Agents in Commercial Games

Game agents such as opponents, non-player characters, and teammates are central to player experiences in many modern games. As the landscape of AI techniques used in the games industry evolves to adopt machine learning (ML) more widely, it is vital that the research community learn from the best practices the industry has cultivated over decades of creating agents. However, although commercial game agent creation pipelines are more mature than those based on ML, opportunities for improvement still abound. To lay a foundation for researchers and practitioners to jointly identify research opportunities, we interviewed seventeen game agent creators from AAA studios, indie studios, and industrial research labs about the challenges they experience in their professional workflows. Our study revealed several open challenges spanning design, implementation, and evaluation. We compare these challenges with literature from the research community that addresses them, and conclude by highlighting promising directions for future research supporting agent creation in the games industry.
