Untangling Dense Knots by Learning Task-Relevant Keypoints

Untangling ropes, wires, and cables is challenging for robots due to the high-dimensional configuration space, visual homogeneity, self-occlusions, and complex dynamics of deformable objects. We consider dense (tight) knots, which lack space between self-intersections, and present an iterative approach that exploits learned geometric structure in cable configurations. We instantiate this approach in an algorithm, HULK (Hierarchical Untangling from Learned Keypoints), which combines learning-based perception with a geometric planner to produce a policy that guides a bilateral robot to untangle knots. To evaluate the policy, we perform experiments both in a novel simulation environment that models cables with varied knot types and textures, and on a physical system using the da Vinci surgical robot. We find that HULK untangles cables with dense figure-eight and overhand knots and generalizes to varied textures and appearances. We compare two variants of HULK to three baselines and observe that HULK achieves a 43.3% higher success rate on the physical system than the next best baseline. HULK successfully untangles a cable from a dense initial configuration containing up to two overhand or figure-eight knots in 97.9% of 378 simulation experiments, with an average of 12.1 actions per trial. In physical experiments, HULK achieves 61.7% untangling success, averaging 8.48 actions per trial. Supplementary material, code, and videos can be found at this https URL.
