Guiding User Adaptation in Serious Games

The complexity of training situations requires teaching different skills to different trainees in different situations. Current approaches to dynamic difficulty adjustment in games use a purely centralized mechanism for this adaptation. This becomes impractical as complexity increases, especially when the past actions of non-player characters need to be taken into account. Agents are increasingly used in serious game implementations as a means to reduce complexity and increase believability, and they can be designed to adapt their behavior to different user requirements and situations. However, this leads to situations in which the lack of coordination between the agents makes it practically impossible to follow the intended storyline of the game and to select suitable difficulty levels for the trainee. In this paper, we present a monitoring system that coordinates the characters' actions and adaptation so as to guarantee combinations of character actions that preserve the storyline. In particular, we propose an architecture for game design that introduces a monitoring module to track the development of user skills and direct coordinated agent adaptation. That is, agents propose possible courses of action that fit their role and context, and the monitor module uses this information, together with its evaluation of the user's level and the storyline's progress, to determine the most suitable combination of proposals.
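The coordination scheme described above can be illustrated with a minimal sketch: each agent submits proposals, and the monitor scores every combination against a target difficulty (derived from the trainee's level) while preferring combinations that advance the storyline. All names here (`Proposal`, `select_combination`, the scoring weights) are illustrative assumptions, not the paper's actual architecture or API.

```python
# Hypothetical sketch of monitor-based coordination of agent proposals.
# Names and scoring weights are assumptions for illustration only.
from dataclasses import dataclass
from itertools import product

@dataclass
class Proposal:
    agent: str           # which character offers this action
    action: str
    difficulty: float    # estimated challenge this action adds
    advances_story: bool # whether the action moves the storyline forward

def select_combination(proposals_per_agent, target_difficulty):
    """Pick one proposal per agent whose combined difficulty best
    matches the trainee's level, preferring storyline progress."""
    best, best_score = None, float("inf")
    for combo in product(*proposals_per_agent):
        total = sum(p.difficulty for p in combo)
        story = any(p.advances_story for p in combo)
        # Penalize combinations that stall the storyline.
        score = abs(total - target_difficulty) + (0.0 if story else 10.0)
        if score < best_score:
            best, best_score = combo, score
    return best

# Example: two characters, two proposals each.
proposals = [
    [Proposal("guard", "patrol", 0.2, False),
     Proposal("guard", "chase", 0.8, True)],
    [Proposal("medic", "idle", 0.0, False),
     Proposal("medic", "assist", 0.3, True)],
]
combo = select_combination(proposals, target_difficulty=1.0)
print([p.action for p in combo])  # the combination closest to the target
```

Exhaustive enumeration is only feasible for small casts; the paper's reference [14] to winner determination in combinatorial auctions suggests this selection step is, in general, a combinatorial optimization problem.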
