Landing A Mobile Robot Safely from Tall Walls Using Manipulator Motion Generated from Reinforcement Learning

A three-tracked-link robot was designed previously for autonomous welding inside double-hulled ship blocks with tight spaces and protruding stiffeners. Bilge blocks, a type of double-hulled block, have a tall wall at the entrance. Climbing down from this tall wall carries a risk of toppling: none of the robot's three links (front arm, body, and rear arm) is long enough to reach the ground from the wall top, and the robot carries a heavy manipulator for welding. Rather than treating the manipulator as a burden, we explore using its motion to shift the center of gravity and help the robot climb down safely. In this paper, we propose using reinforcement learning and physics-based computer simulation to determine suitable motion sequences for safely climbing down from a tall wall. We discovered two effective safe-landing modes that use both arms for the major balancing acts and the manipulator for balance trimming during the controlled landing. The method also let us explore how other design factors, such as manipulator size, manipulator motion type, and changes in the environment, affect the motion sequence.
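To make the setup concrete, the following is a minimal sketch, not the authors' code, of how such a climb-down task could be framed as an OpenAI Gym-style environment. The state layout, the surrogate dynamics, and the reward shaping here are all illustrative assumptions; a real setup would delegate the physics to a simulator such as Gazebo rather than the toy update used below.

```python
# Hypothetical Gym-style environment for the wall climb-down task (illustrative only).
import numpy as np
import gym
from gym import spaces


class WallClimbDownEnv(gym.Env):
    """Choose joint-velocity commands for the front arm, rear arm, and
    manipulator so the robot descends the wall without toppling."""

    def __init__(self, wall_height=1.5, dt=0.05):
        # Observation (assumed): [body pitch, pitch rate, front-arm angle,
        #                         rear-arm angle, manipulator angle, body height]
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(6,), dtype=np.float32)
        # Action: angular-velocity commands for the three actuated joints
        self.action_space = spaces.Box(
            low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
        self.wall_height = wall_height
        self.dt = dt
        self.state = None

    def reset(self):
        # Start perched level on the wall top with all joints at neutral.
        self.state = np.array(
            [0.0, 0.0, 0.0, 0.0, 0.0, self.wall_height], dtype=np.float32)
        return self.state

    def step(self, action):
        pitch, pitch_rate, q_front, q_rear, q_manip, height = self.state
        # Toy surrogate dynamics; a physics engine would replace these updates.
        q_front += action[0] * self.dt
        q_rear += action[1] * self.dt
        q_manip += action[2] * self.dt
        # Arm and manipulator angles shift the center of gravity, driving pitch.
        pitch_rate += (0.5 * q_manip + 0.3 * q_front - 0.3 * q_rear) * self.dt
        pitch += pitch_rate * self.dt
        height = max(0.0, height - 0.02)  # steady descent toward the ground
        self.state = np.array(
            [pitch, pitch_rate, q_front, q_rear, q_manip, height],
            dtype=np.float32)

        toppled = abs(pitch) > np.pi / 3
        landed = height <= 0.0 and not toppled
        # Reward: penalize pitch excursions each step; large terminal bonus/penalty.
        reward = -abs(pitch) + (100.0 if landed else 0.0) - (100.0 if toppled else 0.0)
        done = landed or toppled
        return self.state, reward, done, {}


# Example rollout with a random policy (a trained RL policy would replace this).
env = WallClimbDownEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done, _ = env.step(env.action_space.sample())
```

With an interface like this, any off-the-shelf reinforcement learning algorithm can be run against the environment to search for joint-motion sequences that land the robot without toppling; the surrogate dynamics exist only so the sketch runs standalone.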
