A Relational Representation for Generalized Knowledge in Robotic Tasks

In this paper, a novel representation is proposed in which experience is summarized by a wealth of control and perception primitives that can be mined to learn which combinations of features are most predictive of task success. Exploiting the inherent relational structure of these primitives and the dependencies between them offers a powerful and widely applicable new approach for the robotics community. These dependencies are represented as links in a relational dependency network (RDN), and capture information about how a robot’s actions and observations affect each other when used together in the full context of the task. For example, an RDN trained as an expert to “pick up” things will represent the best way to reach toward an object, given that the robot plans to grasp that object later. Such experts provide information that might not be obvious to a programmer ahead of time, and can be consulted to allow the robot to achieve higher levels of task performance. Furthermore, it seems possible that new, more complex RDNs could be trained by learning the dependencies between existing RDNs. As a result, this paper proposes a principled, hierarchical way of organizing complex behaviors.
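To make the idea concrete, the sketch below shows a toy dependency network over “pick up” primitives in the spirit described above. All variable names, domains, and conditional probabilities are hypothetical illustrations, not values from the paper; real RDNs learn their local conditional models from data and may contain cyclic dependencies requiring Gibbs sampling, whereas this acyclic toy permits exact enumeration.

```python
import itertools

# Toy dependency-network sketch: each node has parent nodes and a local
# conditional model (here, a function returning P(node = 1) given the
# parents). All structure and numbers are illustrative assumptions.
DOMAIN = [0, 1]  # binary primitives for simplicity

def p_reach(parents):     # choice of reach primitive (root node)
    return 0.5

def p_grasp(parents):     # grasp outcome depends on the reach used
    return 0.8 if parents["reach"] == 1 else 0.3

def p_success(parents):   # task success depends on both primitives
    return 0.9 if (parents["reach"] == 1 and parents["grasp"] == 1) else 0.2

RDN = {
    "reach":   ([],                 p_reach),
    "grasp":   (["reach"],          p_grasp),
    "success": (["reach", "grasp"], p_success),
}

ORDER = ["reach", "grasp", "success"]  # topological order for enumeration

def joint(assign):
    """Probability of a full assignment under the (acyclic) toy network."""
    p = 1.0
    for var in ORDER:
        parents, cond = RDN[var]
        p1 = cond({k: assign[k] for k in parents})
        p *= p1 if assign[var] == 1 else 1.0 - p1
    return p

def query(target, evidence):
    """P(target = 1 | evidence) by exhaustive enumeration."""
    num = den = 0.0
    free = [v for v in ORDER if v not in evidence]
    for vals in itertools.product(DOMAIN, repeat=len(free)):
        assign = dict(evidence, **dict(zip(free, vals)))
        p = joint(assign)
        den += p
        if assign[target] == 1:
            num += p
    return num / den

# Consulting the "pick up" expert: which reach primitive should be
# preferred, given that the overall task is required to succeed?
print(query("reach", {"success": 1}))  # reach=1 comes out far more likely
```

Conditioning on downstream success propagates information backward through the links, which is exactly the kind of context-sensitive advice (choose the reach knowing a grasp will follow) that the proposed experts are meant to supply.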