Hierarchical Reinforcement Learning in Computer Games

Hierarchical reinforcement learning is an increasingly popular research field. In hierarchical reinforcement learning, the complete learning task is decomposed into smaller subtasks that are combined in a hierarchy. The subtasks can then be learned independently. A hierarchical decomposition can facilitate state abstraction (i.e., a reduction in state-space complexity) and generalization (i.e., knowledge learned for one subtask can be transferred to other subtasks). In this paper we empirically evaluate the performance of two reinforcement learning algorithms, namely Q-learning and dynamic scripting, in both a flat setting (i.e., without task decomposition) and a hierarchical setting. Moreover, this paper provides a first step towards relational reinforcement learning by introducing a relational representation of the state features and actions. The learning task involves learning a generalized policy for a worker unit in the real-time strategy game BATTLE OF SURVIVAL. We found that hierarchical reinforcement learning significantly outperforms flat reinforcement learning on our task.
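For reference, the flat Q-learning baseline mentioned above follows the standard one-step temporal-difference update. The sketch below is a minimal tabular illustration of that update, not the paper's actual implementation; the epsilon-greedy exploration policy, the data structures, and the hyperparameter values are assumptions made for exposition.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch. The epsilon-greedy policy and the
# hyperparameter values below are illustrative assumptions, not taken
# from the paper.

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

q_table = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state, actions):
    """Epsilon-greedy selection over the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def q_update(state, action, reward, next_state, next_actions):
    """One-step Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table[(next_state, a)] for a in next_actions)
    td_target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (td_target - q_table[(state, action)])
```

In the hierarchical setting, an update of this form would be applied within each subtask separately, so that every subtask maintains its own (smaller) Q-table over its own abstracted state space.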