Deep Haptic Model Predictive Control for Robot-Assisted Dressing

Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real-world physical human-robot interaction can be time consuming, costly, and can put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2 s prediction horizon, our controller succeeded at high rates and lowered applied forces while navigating the garment around a person's fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance, with the sleeve catching on participants' fists and elbows, demonstrating the value of our model's predictions. These catch-mitigating behaviors emerged from our deep predictive model and the controller's objective function, which primarily penalizes high forces.
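
For intuition, the sketch below shows one way a learned haptic force predictor can be combined with sampling-based model predictive control as the abstract describes: candidate action sequences are scored by predicted garment forces, and only the first action of the lowest-cost sequence is executed before replanning. This is a minimal illustration, not the authors' implementation; the class and function names (ForcePredictor, mpc_step), the dimensions, the random-shooting candidate sampler, and the held-constant observation assumption over the horizon are all assumptions introduced here for exposition.

```python
# Illustrative sketch of deep haptic MPC: a recurrent force predictor
# scoring candidate end-effector action sequences. All names, shapes,
# and sampling choices are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class ForcePredictor(nn.Module):
    """Predicts per-step force magnitudes from haptic/kinematic
    observations at the end effector plus proposed actions."""
    def __init__(self, obs_dim=6, act_dim=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim + act_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # one force magnitude per time step

    def forward(self, obs_seq, act_seq):
        # obs_seq: (B, T, obs_dim); act_seq: (B, T, act_dim)
        x = torch.cat([obs_seq, act_seq], dim=-1)
        h, _ = self.lstm(x)
        return self.head(h).squeeze(-1)  # (B, T) predicted forces

def mpc_step(model, obs_hist, n_candidates=64, horizon=4, max_step=0.01):
    """Receding-horizon step: sample candidate action sequences, pick the
    one with the lowest predicted-force cost, execute only its first action."""
    B = n_candidates
    # Sample small random end-effector displacements as candidate plans.
    acts = (torch.rand(B, horizon, 3) * 2 - 1) * max_step
    with torch.no_grad():
        # Hold the latest observation constant over the horizon (a
        # simplifying assumption; a full rollout would feed predictions back).
        latest = obs_hist[-1:].unsqueeze(0)                 # (1, 1, obs_dim)
        future_obs = latest.expand(B, horizon, -1)          # (B, horizon, obs_dim)
        forces = model(future_obs, acts)                    # (B, horizon)
        cost = forces.clamp(min=0).sum(dim=1)               # penalize high forces
    best = cost.argmin()
    return acts[best, 0]                                    # first action of best plan
```

Executing only the first action and then replanning is what makes the controller receding-horizon: at each control cycle the predictor is re-queried with fresh haptic observations, so catches (e.g., the sleeve snagging on a fist) show up as rising predicted forces before large real forces are applied.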
