Let's Do the Time Warp Again: Human Action Assistance for Reinforcement Learning Agents

Reinforcement learning (RL) agents may take a long time to learn a policy for a complex task. One way to help the agent converge on a policy faster is to offer it assistance from a teacher who already has expertise in the task. The teacher can be either a human or another computer agent, and can provide assistance by controlling the reward, the action selection, or the state definition that the agent observes. However, some forms of assistance come more naturally from a human teacher than from a computer teacher, and vice versa. For instance, action selection is challenging for human teachers because computers and humans operate on different timescales: it is difficult to map what a human perceives as selecting an action in a particular state onto the discrete time steps of the computer agent. In this paper, we introduce a system called Time Warp that allows a human teacher to provide action selection assistance during critical moments of the RL agent's training. We find that Time Warp helps the agent develop a better policy in less time than an RL agent with no assistance, and that it rivals the performance of computer teaching agents. Time Warp reaches these results with only ten minutes of human training.
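
To make the action-selection form of assistance concrete, here is a minimal sketch of how a teacher's advice can be injected into a standard tabular Q-learning loop. It assumes a classic Gym-style environment, and the callbacks `is_critical` and `teacher_action` are hypothetical stand-ins for however a system like Time Warp identifies critical moments and collects the human's chosen action; they are not the paper's actual interface.

```python
import random
from collections import defaultdict

def train(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1,
          is_critical=None, teacher_action=None):
    """Tabular Q-learning that defers to a teacher at critical states."""
    q = defaultdict(float)  # (state, action) -> estimated return
    actions = list(range(env.action_space.n))

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            if is_critical and teacher_action and is_critical(q, state):
                # At a critical moment, the (human) teacher picks the action.
                action = teacher_action(state)
            elif random.random() < epsilon:
                action = random.choice(actions)  # explore
            else:
                action = max(actions, key=lambda a: q[(state, a)])  # exploit

            next_state, reward, done, _ = env.step(action)
            # Standard one-step Q-learning update.
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state
    return q
```

With `is_critical` and `teacher_action` left as `None`, this reduces to ordinary unassisted Q-learning, which matches the baseline the abstract compares against.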