Tactile Sensing and Deep Reinforcement Learning for In-Hand Manipulation Tasks

Deep reinforcement learning techniques have demonstrated advances in the domain of robotics. One of the limiting factors is the large number of interaction samples usually required for training in simulated and real-world environments. In this work, we demonstrate that tactile information substantially increases sample efficiency during training (by 97% on average) and simultaneously increases performance on dexterous in-hand object-manipulation tasks (by 21% on average). To examine the role of tactile-sensor parameters in these improvements, we conducted experiments with varied sensor-measurement accuracy (Boolean vs. float values) and varied spatial resolution of the tactile sensors (92 vs. 16 sensors on the hand). We conclude that neither ground-truth touch-sensor readings nor dense tactile resolution further improves performance or sample efficiency in these tasks. We make these touch-sensor extensions available as part of the OpenAI-Gym robotics Shadow-Dexterous-Hand environments.
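For readers who want to try the released environments, the following is a minimal sketch of how such a touch-sensor environment might be loaded and queried through the standard Gym multi-goal API. The environment ID "HandManipulateBlockTouchSensors-v0", the pre-0.26 Gym reset/step signatures, and the placement of the touch readings inside the flat observation vector are assumptions based on the conventional OpenAI-Gym robotics interface, not details stated in the abstract.

```python
# Minimal sketch, assuming OpenAI Gym with the robotics extras (mujoco-py)
# installed and the environment ID "HandManipulateBlockTouchSensors-v0".
import gym

env = gym.make("HandManipulateBlockTouchSensors-v0")  # assumed environment ID
obs = env.reset()

# Multi-goal robotics environments return a dict observation with three keys.
print(sorted(obs.keys()))  # ['achieved_goal', 'desired_goal', 'observation']

# The flat observation vector is assumed to append the touch-sensor readings
# to the usual proprioceptive and object-pose features.
print(obs["observation"].shape)

for _ in range(10):
    action = env.action_space.sample()           # random hand-control signal
    obs, reward, done, info = env.step(action)   # old (4-tuple) Gym step API
    if done:
        obs = env.reset()

env.close()
```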
