Hierarchical Reinforcement Learning with Deictic Representation in a Computer Game

Computer games are challenging test beds for machine learning research. Without abstraction and generalization techniques, many traditional machine learning methods, such as reinforcement learning, fail to learn efficiently. In this paper we examine extensions of reinforcement learning that scale to the complexity of computer games. In particular, we look at hierarchical reinforcement learning applied to a learning task in a real-time strategy computer game. Moreover, we employ a deictic state representation that reduces complexity compared to a propositional representation and allows the adaptive agent to learn a generalized policy, i.e., a policy capable of transferring knowledge to unseen task instances. We find that hierarchical reinforcement learning significantly outperforms flat reinforcement learning on our task.
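To make the distinction between representations concrete, the sketch below illustrates the general idea of a deictic state encoding; it is an illustrative example only, and the `Unit` class, `deictic_state` function, and discretization thresholds are our own assumptions rather than the paper's actual encoding. A propositional representation would enumerate every unit by identity, so the state space grows with the number of objects; a deictic representation instead encodes properties relative to an agent-centered focus such as "the nearest hostile unit", and therefore stays the same size regardless of how many units populate the game.

```python
# Illustrative sketch of a deictic state representation (assumed, not
# taken from the paper): the state is described relative to the agent's
# focus of attention rather than by naming every object in the world.

from dataclasses import dataclass


@dataclass(frozen=True)
class Unit:
    x: int
    y: int
    health: int
    hostile: bool


def deictic_state(agent: Unit, units: list[Unit]) -> tuple:
    """Encode the world relative to 'the nearest hostile unit'.

    The result has a fixed, small number of components, independent of
    how many units exist, which keeps the state space tractable for
    tabular reinforcement learning.
    """
    hostiles = [u for u in units if u.hostile]
    if not hostiles:
        return ("no-threat",)
    # Focus on the nearest hostile unit (Manhattan distance).
    nearest = min(hostiles, key=lambda u: abs(u.x - agent.x) + abs(u.y - agent.y))
    dist = abs(nearest.x - agent.x) + abs(nearest.y - agent.y)
    # Discretize features so the resulting state space stays small;
    # the thresholds here are arbitrary illustration values.
    return (
        "near" if dist <= 3 else "far",
        "weak" if nearest.health < 30 else "strong",
    )
```

Because the encoding never mentions object identities, a policy learned over these deictic states applies unchanged to task instances with different numbers of units, which is the sense in which such a policy generalizes to unseen instances.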