Learning from Demonstration for Distributed, Encapsulated Evolution of Autonomous Outdoor Robots
In learning from demonstration (LfD) a human trainer demonstrates desired behaviors to a robotic agent, creating a training set that the agent can learn from. LfD allows non-programmers to easily and naturally train robotic agents to perform specific tasks. However, to date most LfD has focused on single-robot, single-trainer paradigms, leading to bottlenecks both in the time required to demonstrate tasks and in the time required to learn behaviors. A previously untested approach to addressing these limitations is to use distributed LfD with a distributed evolutionary algorithm. Distributed on-board learning is a model for robust real-world learning that does not require a central computer. In the distributed LfD system presented here, multiple trainers train multiple robots on different, but related, tasks in parallel, and each robot runs its own on-board evolutionary algorithm. The robots share the training data, reducing the total time required for demonstrations, and exchange promising individuals as in typical island models. Our experiments compare robot performance on a task when either evolved behaviors or simple demonstrations are distributed among the robots against the performance of a non-distributed LfD model that receives complex demonstrations. Our results show a strong improvement in behavior when using distributed simple demonstrations.
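The paper does not include an implementation listing; the following is a minimal Python sketch of the kind of scheme the abstract describes, in which each robot (island) evolves its own population against a pooled set of demonstrations and periodically exchanges its best individual with its neighbors. The pooled demonstration set, ring migration topology, linear controller representation, and all parameter names are illustrative assumptions, not details taken from the paper.

```python
import random

# Illustrative sketch: each "island" is one robot running an on-board
# evolutionary algorithm. Demonstrations (state -> action pairs) collected
# in parallel by different trainers are pooled, and the best individuals
# migrate between islands as in a standard island model.

GENOME_LEN = 16         # assumed number of controller parameters
POP_SIZE = 20           # assumed per-robot population size
MIGRATION_INTERVAL = 5  # generations between migrations

def fitness(genome, demonstrations):
    # Score a controller by how closely it reproduces the demonstrated
    # actions (lower error -> higher fitness). A linear policy stands in
    # for whatever controller representation the robots actually evolve.
    error = 0.0
    for state, action in demonstrations:
        predicted = sum(g * s for g, s in zip(genome, state))
        error += (predicted - action) ** 2
    return -error

def evolve_one_generation(population, demonstrations):
    # Truncation selection plus Gaussian mutation, purely for illustration.
    scored = sorted(population, key=lambda g: fitness(g, demonstrations),
                    reverse=True)
    survivors = scored[: POP_SIZE // 2]
    children = []
    while len(survivors) + len(children) < POP_SIZE:
        parent = random.choice(survivors)
        children.append([g + random.gauss(0, 0.1) for g in parent])
    return survivors + children

def run_islands(per_robot_demos, generations=50, n_robots=3):
    # Pool the simple demonstrations collected in parallel by each trainer.
    shared_demos = [d for demos in per_robot_demos for d in demos]
    islands = [
        [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
         for _ in range(POP_SIZE)]
        for _ in range(n_robots)
    ]
    for gen in range(generations):
        islands = [evolve_one_generation(pop, shared_demos) for pop in islands]
        if gen % MIGRATION_INTERVAL == 0:
            # Ring migration: each island receives its neighbor's current best.
            bests = [max(pop, key=lambda g: fitness(g, shared_demos))
                     for pop in islands]
            for i, pop in enumerate(islands):
                pop[-1] = bests[(i - 1) % n_robots]
    return islands

# Example usage with synthetic demonstrations (random states, scalar actions):
demos = [[([random.uniform(-1, 1) for _ in range(GENOME_LEN)],
           random.uniform(-1, 1)) for _ in range(10)] for _ in range(3)]
final_islands = run_islands(demos)
```

Pooling the demonstrations reflects the abstract's claim that sharing training data reduces total demonstration time, while the periodic exchange of best individuals is the standard island-model mechanism the abstract references; the specific migration schedule and topology here are assumptions.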