An Uncertainty-Aware Minimal Intervention Control Strategy Learned from Demonstrations

Motivated by the desire to have robots physically present in human environments, recent years have seen an emergence of approaches for learning active compliance. Some of the most compelling solutions exploit a minimal intervention control principle, correcting deviations from a goal only when necessary, and among the approaches that follow this principle, several probabilistic techniques have stood out. However, these approaches tend to require many task demonstrations for proper gain estimation and can generate unpredictable robot motions in the face of uncertainty. Here we present a Programming by Demonstration approach for uncertainty-aware impedance regulation, aimed at making the robot compliant - and safe to interact with - when the uncertainty about its predicted actions is high. Moreover, we propose a data-efficient strategy, based on the energy observed during demonstrations, to achieve minimal intervention control when the uncertainty is low. The approach is validated in an experimental scenario where a human collaboratively moves an object with a 7-DoF torque-controlled robot.
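The core idea of uncertainty-aware impedance regulation can be illustrated with a minimal sketch: an impedance controller whose stiffness gain decreases as the predictive uncertainty of the learned model grows, so the robot tracks its reference firmly (minimal intervention) where predictions are confident and becomes compliant where they are not. The exponential scaling law, the gain values, and the damping rule below are illustrative assumptions, not the energy-based formulation from the paper.

```python
import numpy as np

def uncertainty_scaled_stiffness(sigma2, k_max=500.0, k_min=10.0, alpha=5.0):
    """Map predictive variance sigma2 to a stiffness gain.

    Low variance  -> stiffness near k_max (firm tracking, minimal intervention).
    High variance -> stiffness decays toward k_min (compliant, safe to touch).
    The exponential decay and the numeric values are illustrative assumptions.
    """
    return k_min + (k_max - k_min) * np.exp(-alpha * np.asarray(sigma2))

def impedance_force(x, x_des, dx, sigma2, damping_ratio=1.0):
    """Cartesian impedance law F = K(sigma2) * (x_des - x) - D * dx,
    with damping chosen as D = 2 * damping_ratio * sqrt(K) so the closed-loop
    behavior stays well damped as the stiffness is scaled down."""
    k = uncertainty_scaled_stiffness(sigma2)
    d = 2.0 * damping_ratio * np.sqrt(k)
    return k * (x_des - x) - d * dx
```

In practice, `sigma2` would come from the variance of a probabilistic model of the demonstrations (e.g. a Gaussian process or a Gaussian mixture conditional), and the gains would be bounded by the hardware's safe torque limits.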
