RL Generalization in a Theory of Mind Game Through a Sleep Metaphor (Student Abstract)

Training agents to learn efficiently in multi-agent environments can benefit from explicitly modelling other agents’ beliefs, especially in complex limited-information games such as the Hanabi card game. However, generalization is also highly relevant to performance in these games, and comparing models over long training timescales can be difficult. In this work, we address this by introducing a novel model trained using a sleep metaphor on a reduced-complexity version of the Hanabi game. The sleep metaphor consists of an altered training regimen as well as an information-theoretic constraint on the agent’s policy. Our experimental results demonstrate that this sleep-metaphor method improves performance, and they provide promising motivation for applying similar techniques to more complex methods that incorporate explicit models of other agents’ beliefs.
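
To make the two ingredients of the sleep metaphor concrete, the following is a minimal, illustrative sketch in PyTorch. The abstract does not specify the exact formulation, so everything here is an assumption: the alternation of "wake" and "sleep" phases stands in for the altered training regimen, a KL-to-uniform (entropy) penalty stands in for the information-theoretic policy constraint, and the REINFORCE objective, network sizes, phase lengths, and coefficient values are placeholders rather than the authors' method.

```python
# Hypothetical sketch of a sleep-metaphor training loop; not the authors'
# actual implementation. Phase lengths, the entropy-based constraint, and
# the REINFORCE objective are all illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.head = nn.Linear(64, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.body(obs))  # action logits

def policy_loss(logits, actions, returns, beta: float) -> torch.Tensor:
    """REINFORCE loss plus an information-theoretic policy penalty.

    KL(pi || uniform) = log(n_actions) - H(pi), so penalizing that KL
    (dropping the constant) amounts to rewarding policy entropy, which
    discourages overly deterministic, memorized behaviour.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(chosen * returns).mean()
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    return pg_loss - beta * entropy

def train(policy, optimizer, sample_batch, n_cycles=100,
          wake_steps=50, sleep_steps=10, sleep_beta=0.1):
    """Alternate 'wake' updates (constraint off, beta=0) with 'sleep'
    updates (constraint on), mimicking an altered training regimen.
    `sample_batch` is a hypothetical helper returning (obs, actions,
    returns) tensors collected from the environment."""
    for _ in range(n_cycles):
        for phase_steps, beta in ((wake_steps, 0.0), (sleep_steps, sleep_beta)):
            for _ in range(phase_steps):
                obs, actions, returns = sample_batch()
                loss = policy_loss(policy(obs), actions, returns, beta)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```

Under these assumptions, the "sleep" phases act as a periodic regularization pass: the agent continues to train on the same objective but is pushed toward higher-entropy policies, which is one plausible mechanism for the generalization improvement the abstract reports.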