Learning to navigate cloth using haptics

We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation through the use of haptic information. The main challenge for such a controller is to avoid getting tangled in, tearing, or punching through the deforming cloth. Our controller aggregates force information from a number of haptic-sensing spheres placed along the manipulator for guidance. Based on the haptic forces, each individual sphere updates its target location, and the conflicts that arise among this set of desired positions are resolved by solving an inverse kinematics problem with constraints. Reinforcement learning is used to train the controller for a single haptic-sensing sphere, where a training run is terminated (and thus penalized) when large forces are detected due to contact between the sphere and a simplified model of the cloth. In simulation, we demonstrate successful navigation of a robotic arm through a variety of garments, including an isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two baseline controllers: one without haptics, and another that was trained on large sphere-cloth contact forces but without early termination.
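To make the control loop concrete, the sketch below gives one plausible reading of the pipeline described above: each haptic-sensing sphere deflects its target away from sensed contact forces, the per-sphere targets are reconciled by a constrained IK solve, and training episodes end early (with a penalty) when contact forces grow large. All names, thresholds (FORCE_LIMIT, STEP_SIZE), and the solve_constrained_ik callable are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the haptic navigation loop; names and thresholds are assumptions.
import numpy as np

FORCE_LIMIT = 20.0   # assumed contact-force threshold for early termination
STEP_SIZE = 0.01     # assumed per-step displacement of a sphere's target

class HapticSphere:
    """One haptic-sensing sphere placed along the manipulator."""
    def __init__(self, position):
        self.position = np.asarray(position, dtype=float)
        self.target = self.position.copy()

    def update_target(self, contact_force, goal_direction):
        # Move the target toward the task goal, but deflect it away from
        # the direction of the sensed cloth contact force.
        force_norm = np.linalg.norm(contact_force)
        deflection = -contact_force / (force_norm + 1e-8)
        blend = np.clip(force_norm / FORCE_LIMIT, 0.0, 1.0)
        direction = (1.0 - blend) * goal_direction + blend * deflection
        self.target = self.position + STEP_SIZE * direction

def resolve_targets(spheres, solve_constrained_ik):
    """Reconcile the spheres' (possibly conflicting) targets with a single
    constrained inverse-kinematics solve; the solver is assumed to be given."""
    targets = [s.target for s in spheres]
    return solve_constrained_ik(targets)   # returns a joint configuration

def training_step(contact_force, reached_goal):
    """Reward/termination sketch: large contact forces end the episode early
    and are penalized, mirroring the early-termination scheme in the abstract."""
    if np.linalg.norm(contact_force) > FORCE_LIMIT:
        return -1.0, True          # penalty plus early termination
    return (1.0 if reached_goal else 0.0), reached_goal
```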
