Can active impedance protect robots from landing impact?

This paper studies the effect of passive and active impedance in protecting jumping robots from landing impacts. The theory of force transmissibility is used to select the passive impedance of the system so as to minimize shock propagation. The active impedance is regulated online by a joint-level controller, on top of which a reflex-based leg retraction scheme is implemented and optimized using direct policy search reinforcement learning based on particle filtering. Experiments are conducted both in simulation and on a real-world hopping leg. We show that although the impact dynamics are fast, the added passive impedance provides enough time for the active impedance controller to react to the impact and protect the robot from damage.
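As a minimal illustration of the joint-level active impedance regulation mentioned above, the sketch below implements a standard virtual spring-damper (PD-form) impedance law. The function name, gains, and numerical values are illustrative assumptions, not taken from the paper:

```python
def impedance_torque(q, qd, q_des, qd_des, k, d):
    """Joint-level impedance control: torque of a virtual spring-damper
    pulling the joint toward (q_des, qd_des) with stiffness k and damping d."""
    return k * (q_des - q) + d * (qd_des - qd)

# Example: at touchdown the joint is deflected 0.2 rad from its setpoint
# and moving downward at 1 rad/s.
tau = impedance_torque(q=0.2, qd=-1.0, q_des=0.0, qd_des=0.0, k=50.0, d=5.0)
# spring term restores toward the setpoint; damper term opposes the
# downward joint velocity, softening the shock
```

Lowering `k` and raising `d` around touchdown is one simple way such a controller can trade tracking stiffness for impact absorption.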
