Reflection in Action: Model-Based Self-Adaptation in Game Playing Agents

Computer war strategy games offer a challenging domain for AI learning techniques because they involve multiple players, partially observable worlds, and extremely large state spaces. Model-based reflection and self-adaptation is one method for learning in such a complex domain. In this method, the game-playing agent contains a model of its own reasoning processes. When the agent fails to win a game, it uses its self-model and (possibly) traces of its execution to analyze the failure and modify its knowledge and reasoning accordingly. In this paper, we describe an experimental investigation of model-based reflection and self-adaptation for a specific task (defending a city) in a computer war strategy game called Civilization. Our results indicate that, at least for limited tasks, model-based reflection enables effective learning; further, when traces are used in conjunction with the model, the effectiveness of learning appears to increase with the size of the trace.
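As a rough illustration of the reflection loop described above, the sketch below shows one way an agent might record an execution trace, compare it against a model of its own reasoning after a failed game, and mark the faulty step's knowledge for revision. All names here (Agent, ReasoningStep, reflect, and the toy "required_action" world) are hypothetical illustrations, not the paper's actual architecture.

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    """One element of the agent's self-model: a named reasoning
    task together with the knowledge it applies."""
    name: str
    knowledge: dict

@dataclass
class Agent:
    """Hypothetical agent that records an execution trace and, on
    failure, uses its self-model plus the trace to localize and
    repair the faulty reasoning step."""
    self_model: list                               # ordered ReasoningSteps
    trace: list = field(default_factory=list)      # (step name, outcome) pairs

    def play_step(self, step: ReasoningStep, world_state: dict) -> bool:
        # Placeholder for actual game reasoning: the step succeeds only
        # if its knowledge matches what the (partially observable)
        # world actually demanded.
        outcome = step.knowledge.get("action") == world_state.get("required_action")
        self.trace.append((step.name, outcome))
        return outcome

    def reflect(self):
        """Model-based reflection: walk the trace in lockstep with the
        self-model to find the step that failed, then adapt its
        knowledge before the next game."""
        for step, (name, outcome) in zip(self.self_model, self.trace):
            if not outcome:
                step.knowledge["needs_revision"] = True   # placeholder repair
                return step.name
        return None

if __name__ == "__main__":
    model = [ReasoningStep("assess-threat", {"action": "scout"}),
             ReasoningStep("defend-city", {"action": "build-walls"})]
    agent = Agent(self_model=model)
    world = {"required_action": "fortify-units"}   # what the defense actually required
    for step in agent.self_model:
        agent.play_step(step, world)
    print("Faulty step:", agent.reflect())         # -> assess-threat
```

In this toy version, the trace tells the agent *where* its reasoning went wrong and the self-model tells it *which* piece of knowledge to revise; a longer trace gives the diagnosis more failure points to localize, which is consistent with the abstract's observation that learning effectiveness appears to increase with trace size.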