Environment-adaptive interaction primitives for human-robot motor skill learning

In complex environments where robots are expected to cooperate with human partners, it is vital for the robot to consider properties of the collaborative activity in addition to the behavior of its partner. In this paper, we propose to learn such complex interactive skills by observing demonstrations of a human-robot team together with additional external attributes. We propose Environment-adaptive Interaction Primitives (EaIPs) as an extension of Interaction Primitives. In cooperation tasks between a human and a robot under varying environmental conditions, EaIPs not only predict the robot's motor skills from a brief observation of the human's motion, but also generalize to new environmental conditions by learning the relationship between each condition and the corresponding motor skills from training samples. Our method is validated on the collaborative task of covering objects with a plastic bag using a humanoid Baxter robot. To complete the task successfully, the robot must coordinate with its partner while also accounting for properties of the object to be covered.
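The conditioning step described above can be illustrated with a minimal sketch. Interaction Primitives typically fit a joint Gaussian over the basis-function weights of the human's and robot's trajectories; the environment-adaptive variant additionally stacks environment parameters into that joint vector, so predicting the robot's weights reduces to Gaussian conditioning on the observed human weights and the current environment parameters. The function names, the toy linear relation, and the dimensionalities below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fit_joint_gaussian(demos):
    """Fit a joint Gaussian over stacked demonstration vectors.

    Each row of `demos` is [human weights | env parameters | robot weights]
    (a simplifying assumption; real IPs use many basis-function weights).
    """
    mu = demos.mean(axis=0)
    # Small regularizer keeps the covariance invertible for few demos.
    sigma = np.cov(demos, rowvar=False) + 1e-6 * np.eye(demos.shape[1])
    return mu, sigma

def condition(mu, sigma, obs, n_obs):
    """Predict the remaining (robot) dimensions given the first `n_obs`
    observed dimensions (human weights and environment parameters)."""
    mu_o, mu_r = mu[:n_obs], mu[n_obs:]
    s_oo = sigma[:n_obs, :n_obs]
    s_ro = sigma[n_obs:, :n_obs]
    gain = s_ro @ np.linalg.inv(s_oo)
    return mu_r + gain @ (obs - mu_o)

# Toy data: robot weight depends on both the human weight and an
# environment parameter (here, exactly 2*h + e).
rng = np.random.default_rng(0)
h = rng.normal(size=(50, 1))
e = rng.normal(size=(50, 1))
demos = np.hstack([h, e, 2 * h + e])

mu, sigma = fit_joint_gaussian(demos)
pred = condition(mu, sigma, np.array([0.5, 1.0]), n_obs=2)
print(pred)  # close to [2.0], since 2*0.5 + 1.0 = 2.0
```

Because the environment parameter is part of the joint distribution, a new condition shifts the predicted robot weights without retraining, which is the generalization ability the abstract refers to.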
