C-LEARN: Learning geometric constraints from demonstrations for multi-step manipulation in shared autonomy

Learning from demonstrations has been shown to be a successful method for non-experts to teach manipulation tasks to robots. These methods typically build generative models from demonstrations and then use regression to reproduce skills. However, this approach has limitations in capturing the hard geometric constraints imposed by the task. On the other hand, sampling- and optimization-based motion planners exist that reason about geometric constraints, but they are typically carefully hand-crafted by an expert. To address this technical gap, we contribute C-LEARN, a method that learns multi-step manipulation tasks from demonstrations as a sequence of keyframes and a set of geometric constraints. The system builds a knowledge base for reaching and grasping objects, which is then leveraged to learn multi-step tasks from a single demonstration. C-LEARN supports multi-step tasks with multiple end effectors; reasons about SE(3) volumetric and CAD constraints, such as the requirement that two axes be parallel; and offers a principled way to transfer skills between robots with different kinematics. We embed the execution of the learned tasks within a shared autonomy framework, and evaluate our approach by analyzing the success rate when performing physical tasks with a dual-arm Optimus robot, comparing the contributions of different constraint models, and demonstrating the ability of C-LEARN to transfer learned tasks by performing them with a legged dual-arm Atlas robot in simulation.
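
The abstract describes tasks as sequences of keyframes annotated with geometric constraints, including CAD-style constraints such as two axes being required to remain parallel. The sketch below illustrates one plausible form of that representation in Python; the class names, tolerance, and check logic are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from dataclasses import dataclass, field

# Hypothetical sketch of a keyframe-plus-constraints representation in the
# spirit of the abstract; names and thresholds are assumptions for
# illustration, not drawn from the C-LEARN codebase.

@dataclass
class ParallelAxesConstraint:
    """CAD-style constraint: an end-effector axis must stay parallel
    to a reference axis in the world frame (e.g., keep a tray level)."""
    ee_axis: np.ndarray       # unit axis expressed in the end-effector frame
    world_axis: np.ndarray    # unit reference axis in the world frame
    tol_rad: float = 0.05     # angular tolerance in radians (assumed value)

    def satisfied(self, R_world_ee: np.ndarray) -> bool:
        # Rotate the end-effector axis into the world frame and compare
        # its direction with the reference axis (sign-insensitive).
        axis_world = R_world_ee @ self.ee_axis
        cos_angle = np.clip(abs(float(axis_world @ self.world_axis)), 0.0, 1.0)
        return float(np.arccos(cos_angle)) <= self.tol_rad

@dataclass
class Keyframe:
    """One step of a multi-step task: a target pose per end effector plus
    the geometric constraints active at that step."""
    ee_poses: dict                                    # name -> 4x4 pose
    constraints: list = field(default_factory=list)   # (name, constraint)

    def satisfied(self) -> bool:
        # A keyframe is valid when every attached constraint holds for
        # the rotation part of its end effector's pose.
        return all(c.satisfied(self.ee_poses[name][:3, :3])
                   for name, c in self.constraints)

# Usage: require the right gripper's z-axis to stay vertical at this keyframe.
kf = Keyframe(
    ee_poses={"right": np.eye(4)},
    constraints=[("right", ParallelAxesConstraint(
        ee_axis=np.array([0.0, 0.0, 1.0]),
        world_axis=np.array([0.0, 0.0, 1.0])))],
)
assert kf.satisfied()
```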
