A framework for the adaptive transfer of robot skill knowledge using reinforcement learning agents

A framework, called skill advice guided exploration (SAGE), for the adaptive transfer of robot skill knowledge using reinforcement learning (RL) agents is presented. A skill is viewed as a reactive policy that maps world states to agent actions; it may be acquired through learning or hand-coded by the designer. The SAGE framework allows multiple, possibly conflicting, sources of knowledge to be incorporated simultaneously. An abstraction for knowledge in an RL system, called advice, is introduced. The advice abstraction permits the transfer of information between RL agents with differing internal representations. A SAGE-based system can learn to disregard misleading advice. The potential of this methodology is demonstrated on a set of discrete learning tasks. Results show that SAGE-based systems can benefit from relevant information and that incorrect information does not prevent learning of the task solution. The benefits, limitations, and possible extensions of this work are discussed.
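The core idea of advice-guided exploration can be illustrated with a minimal sketch: a tabular Q-learning agent whose exploratory actions are drawn from external advice sources, each carrying a learned reliability weight that rises or falls with the temporal-difference error of the advised actions. All names here (`AdviceGuidedAgent`, the specific reliability-update rule) are illustrative assumptions, not the published SAGE algorithm.

```python
import random
from collections import defaultdict

class AdviceGuidedAgent:
    """Hypothetical sketch of advice-guided exploration in tabular Q-learning.

    Each advisor is a callable state -> suggested action. Misleading
    advisors lose reliability over time and are eventually ignored.
    """

    def __init__(self, actions, advisors, alpha=0.5, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)                 # Q[(state, action)]
        self.actions = actions
        self.advisors = advisors
        self.reliability = [1.0] * len(advisors)    # trust in each advisor
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Exploit: act greedily with respect to Q most of the time.
        if random.random() > self.epsilon:
            return max(self.actions, key=lambda a: self.q[(state, a)])
        # Explore: sample an advisor in proportion to its reliability,
        # falling back to a random action once all trust has decayed.
        total = sum(self.reliability)
        if total <= 1e-9:
            return random.choice(self.actions)
        r, acc = random.uniform(0, total), 0.0
        for advise, weight in zip(self.advisors, self.reliability):
            acc += weight
            if r <= acc:
                return advise(state)
        return random.choice(self.actions)

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td
        # Credit or discredit advisors whose suggestion matched the action
        # taken: positive TD error raises trust, negative TD error lowers it.
        for i, advise in enumerate(self.advisors):
            if advise(state) == action:
                self.reliability[i] = max(0.0, self.reliability[i] + 0.1 * td)
```

Because exploration probability mass shifts toward trusted advisors rather than replacing the greedy policy, relevant advice speeds up learning while a discredited advisor degrades gracefully to ordinary random exploration, mirroring the abstract's claim that incorrect information does not prevent learning.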