Accurate Vision-based Manipulation through Contact Reasoning

Planning contact interactions is one of the core challenges of many robotic tasks. Optimizing contact locations while taking dynamics into account is computationally costly and, in environments that are only partially observable, executing contact-based tasks often suffers from low accuracy. We present an approach that addresses these two challenges for the problem of vision-based manipulation. First, we propose to disentangle contact from motion optimization. Thereby, we improve planning efficiency by focusing computation on promising contact locations. Second, we use a hybrid approach for perception and state estimation that combines neural networks with a physically meaningful state representation. In simulation and real-world experiments on the task of planar pushing, we show that our method is more efficient and achieves a higher manipulation accuracy than previous vision-based approaches.
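To make the "disentangle contact from motion optimization" idea concrete, below is a minimal, self-contained sketch of a decoupled planner for planar pushing. It is not the paper's implementation: the disk-shaped object, the translation-only quasi-static push model, the geometric contact score, and the brute-force motion stage are all illustrative assumptions. The point it demonstrates is structural: a cheap first stage ranks candidate contact locations, and the (relatively) expensive motion optimization is run only for the top-ranked contacts.

```python
# Minimal sketch of decoupled contact/motion optimization for planar pushing.
# Assumptions (not from the paper): a disk-shaped object, a crude quasi-static
# translation-only push model, and a cheap geometric contact score.
import numpy as np

RADIUS = 0.05          # object radius [m] (illustrative)
PUSH_STEP = 0.005      # pusher displacement per time step [m]
N_CONTACTS = 32        # candidate contact points sampled on the boundary
TOP_K = 4              # contacts kept for the (more expensive) motion stage


def contact_candidates(center):
    """Sample contact points on the object boundary with inward unit normals."""
    angles = np.linspace(0.0, 2.0 * np.pi, N_CONTACTS, endpoint=False)
    points = center + RADIUS * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    normals = (center - points) / RADIUS
    return points, normals


def contact_score(normals, center, goal):
    """Cheap stage-1 score: alignment of the contact normal with the
    direction from the object to the goal (higher is better)."""
    to_goal = goal - center
    to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    return normals @ to_goal


def rollout(center, normal, n_steps):
    """Crude quasi-static model: the object translates along the push normal.
    A real system would plug in a learned or analytical pusher-slider model."""
    return center + n_steps * PUSH_STEP * normal


def optimize_motion(center, normal, goal, max_steps=60):
    """Stage 2: optimize push duration for one contact (brute force here)."""
    best_steps, best_err = 0, np.linalg.norm(goal - center)
    for steps in range(1, max_steps + 1):
        err = np.linalg.norm(goal - rollout(center, normal, steps))
        if err < best_err:
            best_steps, best_err = steps, err
    return best_steps, best_err


def plan_push(center, goal):
    """Decoupled planner: score all contacts cheaply, then run motion
    optimization only for the TOP_K most promising ones."""
    points, normals = contact_candidates(center)
    scores = contact_score(normals, center, goal)
    best_contact, best_steps, best_err = None, 0, np.inf
    for idx in np.argsort(-scores)[:TOP_K]:
        steps, err = optimize_motion(center, normals[idx], goal)
        if err < best_err:
            best_contact, best_steps, best_err = points[idx], steps, err
    return best_contact, best_steps, best_err


if __name__ == "__main__":
    contact, steps, err = plan_push(np.array([0.0, 0.0]), np.array([0.2, 0.1]))
    print(f"push at {contact} for {steps} steps, expected error {err:.3f} m")
```

In this toy version the motion stage is evaluated for only TOP_K of the N_CONTACTS candidates, which is where the claimed efficiency gain of separating contact selection from motion optimization would come from; in the paper's setting the same split would apply with a learned contact ranker and a dynamics-aware trajectory optimizer in place of the placeholders above.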
