Symbol generation and feature selection for reinforcement learning agents using affordances and U-Trees

One of the challenges for artificial agents is managing the complexity of their environment and task domain as they learn increasingly difficult tasks. This is especially true of agents grounded in the physical world, which contains a vast number of features and potentially very complex dynamics. A scalable approach to forming, managing, and re-using compact, grounded representations, and thereby addressing the state explosion problem, is thus a prerequisite for physically grounded, agent-based systems that can apply their past experience to new tasks and communicate that experience to other agents. To achieve this, agents must be able to form, without outside intervention, conceptual features that are relevant to and re-usable in their task domain, and to focus their attention on only the features and concepts relevant to the task at hand. This paper presents a framework for managing state complexity by automatically constructing abstract, symbolic features that encode important, task- and domain-relevant properties and partition the raw feature space, so that the agent need only consider a compressed view of the environment when learning new tasks. To exploit these features, the framework uses U-Trees during the learning of new tasks to construct minimal feature sets, and thus compact state representations, for these tasks, allowing for potentially significant improvements in learning time.
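To make the feature-selection step concrete, the following is a minimal sketch, not this paper's implementation, of the classical split criterion from McCallum's U-Tree algorithm: a leaf is split on a candidate feature only when the resulting child leaves hold significantly different distributions of estimated discounted return, so that only task-relevant features enter the state representation. The names `Transition` and `best_split` are illustrative assumptions.

```python
# Hypothetical sketch of a U-Tree split test (after McCallum's U-Tree,
# under assumptions not drawn from this paper).

from dataclasses import dataclass
from scipy.stats import ks_2samp


@dataclass
class Transition:
    features: dict[str, float]  # raw or symbolic feature values
    q_estimate: float           # estimated discounted return


def best_split(transitions, candidate_features, alpha=0.05):
    """Return the (feature, threshold) pair whose binary split most
    separates the return distributions, or None if none is significant."""
    best, best_p = None, alpha
    for feat in candidate_features:
        values = sorted({t.features[feat] for t in transitions})
        for threshold in values[:-1]:  # candidate binary distinctions
            left = [t.q_estimate for t in transitions
                    if t.features[feat] <= threshold]
            right = [t.q_estimate for t in transitions
                     if t.features[feat] > threshold]
            if len(left) < 2 or len(right) < 2:
                continue
            # Kolmogorov-Smirnov test: do returns differ between children?
            p = ks_2samp(left, right).pvalue
            if p < best_p:
                best, best_p = (feat, threshold), p
    return best  # features never selected stay out of the representation
```

Because splits are only introduced when they change the return distribution, features irrelevant to the task are never tested for, which is the sense in which the tree yields a minimal feature set.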