The goal of this research is to enable robots to learn new things from everyday people. For years, the AI and Robotics communities have sought to enable robots to efficiently learn new skills from a knowledgeable human trainer, and prior work has addressed several important technical problems. However, this vast body of research in robot Learning from Demonstration has by and large been evaluated only with expert humans, typically the system's designer. It has thus neglected a key point: the interaction takes place within a social structure that can guide and constrain the learning problem. We believe that addressing this point is essential for developing systems that can learn from everyday people who are not experts in Machine Learning or Robotics.

Our work focuses on new research questions involved in letting robots learn from everyday human partners. For example: What kind of input do people want to provide a machine learner? How does their mental model of the learning process affect this input? What interfaces and interaction mechanisms can help people provide better input from a machine learning perspective? Often our research begins with an investigation into the feasibility of a particular machine learning interaction, which leads to a series of research questions around re-designing both the interaction and the algorithm to better suit learning with end-users. We believe this equal focus on both the Machine Learning and the HRI contributions is key to making progress toward the goal of machines learning from humans. In this abstract we briefly overview four projects that highlight our HRI approach to the problem of Learning from Demonstration.