Learning from feedback on actions past and intended
Robotic learning promises eventually to provide great societal benefits. In contrast to pure trial-and-error learning, human instruction has at least two advantages: (1) human teaching can lead to much faster learning, since humans can model the delayed outcome of a behavior and give feedback immediately, unambiguously informing the robot of the quality of its recent action; and (2) human instruction can serve to define a task objective, empowering end-users who lack programming skills to customize behavior.

The TAMER framework [3, 2] was developed to provide a learning mechanism for a specific, psychologically grounded [1] form of teaching: signals of reward and punishment. TAMER breaks the process of interactively learning behaviors from live human reward into three modules: credit assignment, where delayed human reward is attributed appropriately to recent events; regression on experienced events and the reward credited to them, producing a predictive model of future human reward; and action selection using that model.

TAMER differs in multiple ways from traditional reinforcement learning (RL) algorithms, which are generally powerful and intuitive but ultimately ill-suited to learning from human reward. For instance, human reward is stochastically delayed from the event that prompted it; TAMER acknowledges this delay, which traditional RL ignores, and adjusts for it. More importantly, human trainers consider the long-term effects of actions, making each reward a complete judgment of the quality of recent actions. Predictions of near-term human reward are therefore analogous to estimates of expected long-term return in RL, simplifying action selection to choosing the action with the highest predicted human reward.
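The three modules above can be illustrated with a minimal sketch. This is not the authors' implementation: the class name, uniform credit weights over a fixed window, and per-action linear models are all simplifying assumptions made here for illustration (published TAMER work credits events using a probability distribution over feedback delay).

```python
from collections import deque

class TamerSketch:
    """Hypothetical sketch of the TAMER loop: credit assignment over recent
    events, incremental regression toward the credited reward, and greedy
    action selection on the learned model of human reward."""

    def __init__(self, n_features, n_actions, window=2.0, lr=0.01):
        # One linear model per action predicting immediate human reward.
        self.w = [[0.0] * n_features for _ in range(n_actions)]
        self.n_actions = n_actions
        self.window = window      # seconds of history eligible for credit
        self.lr = lr
        self.history = deque()    # (timestamp, features, action)

    def predict(self, features, action):
        return sum(wi * fi for wi, fi in zip(self.w[action], features))

    def select_action(self, features):
        # Greedy selection suffices: each human reward already judges the
        # long-term quality of recent actions, so no discounted-return
        # bookkeeping is needed.
        return max(range(self.n_actions), key=lambda a: self.predict(features, a))

    def record(self, t, features, action):
        # Remember each experienced event for later credit assignment.
        self.history.append((t, features, action))
        while self.history and t - self.history[0][0] > self.window:
            self.history.popleft()

    def give_reward(self, t, human_reward):
        # Credit assignment: spread the delayed signal over events inside
        # the window (uniform weights are a simplifying assumption here),
        # then take a gradient step on each credited event.
        eligible = [(f, a) for (te, f, a) in self.history if t - te <= self.window]
        if not eligible:
            return
        credit = 1.0 / len(eligible)
        for features, action in eligible:
            err = credit * human_reward - self.predict(features, action)
            for i, fi in enumerate(features):
                self.w[action][i] += self.lr * err * fi
```

A single positive reward after an event raises the model's prediction for that state-action pair, which in turn biases greedy selection toward it.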
On multiple tasks, TAMER agents have been shown to learn more quickly, sometimes dramatically so, than counterparts that learn from a predefined evaluation function instead of human interaction. Further, the TAMER framework
[1] Andrea Lockerd Thomaz et al. Reinforcement Learning with Human Teachers: Evidence of Feedback and Guidance with Implications for Learning Performance. AAAI, 2006.
[2] Peter Stone et al. Interactively Shaping Agents via Human Reinforcement: The TAMER Framework. K-CAP '09, 2009.
[3] P. Stone et al. TAMER: Training an Agent Manually via Evaluative Reinforcement. 7th IEEE International Conference on Development and Learning, 2008.
[4] M. Bouton. Learning and Behavior: A Contemporary Synthesis. 2006.