Combining Physical Simulators and Object-Based Networks for Control

Physics engines play an important role in robot planning and control; however, many real-world control problems involve complex contact dynamics that cannot be characterized analytically. Most physics engines therefore employ approximations that lead to a loss in precision. In this paper, we propose a hybrid dynamics model, simulator-augmented interaction networks (SAIN), combining a physics engine with an object-based neural network for dynamics modeling. Compared with existing models that are purely analytical or purely data-driven, our hybrid model captures the dynamics of interacting objects in a more accurate and data-efficient manner. Experiments both in simulation and on a real robot suggest that it also leads to better performance when used in complex control tasks. Finally, we show that our model generalizes to novel environments with varying object shapes and materials.
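
To make the hybrid idea concrete, below is a minimal sketch of one way such a model could be assembled: an analytical simulator proposes the next object states, and an object-based (interaction-network-style) neural module learns a correction on top of that proposal. This is an illustrative assumption about the structure, not the paper's actual SAIN implementation; the names `AnalyticalSimulator`, `ResidualInteractionNetwork`, and `hybrid_step`, the point-mass stand-in for a real physics engine, and the residual formulation are all hypothetical choices made for the example.

```python
# Sketch (not the authors' code) of a hybrid dynamics model: an analytical
# simulator predicts next object states, and an object-based network learns
# a residual correction. All names and the toy "simulator" are illustrative.

import torch
import torch.nn as nn


class AnalyticalSimulator:
    """Placeholder physics engine: constant-velocity point masses
    (a stand-in for an engine such as Bullet or MuJoCo)."""

    def step(self, states, dt=0.1):
        # states: (num_objects, 4) -> [x, y, vx, vy]
        next_states = states.clone()
        next_states[:, :2] += states[:, 2:] * dt
        return next_states


class ResidualInteractionNetwork(nn.Module):
    """Object-based network: encodes pairwise object interactions and
    predicts a per-object correction to the simulator's prediction."""

    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.relation = nn.Sequential(          # effect of object j on object i
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        self.object = nn.Sequential(            # aggregated effects -> correction
            nn.Linear(2 * state_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def forward(self, states, sim_pred):
        n = states.shape[0]
        # All ordered object pairs (i, j) with i != j.
        idx_i, idx_j = torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij")
        mask = idx_i != idx_j
        pairs = torch.cat([states[idx_i[mask]], states[idx_j[mask]]], dim=-1)
        effects = self.relation(pairs)                 # (n*(n-1), hidden)
        agg = effects.view(n, n - 1, -1).sum(dim=1)    # sum incoming effects per object
        correction = self.object(torch.cat([states, sim_pred, agg], dim=-1))
        return sim_pred + correction                   # hybrid prediction


def hybrid_step(simulator, net, states):
    """One step of the hybrid model: simulator proposal refined by the network."""
    with torch.no_grad():
        sim_pred = simulator.step(states)
    return net(states, sim_pred)


if __name__ == "__main__":
    sim, net = AnalyticalSimulator(), ResidualInteractionNetwork()
    states = torch.randn(3, 4)                  # 3 objects, each [x, y, vx, vy]
    print(hybrid_step(sim, net, states).shape)  # torch.Size([3, 4])
```

In a residual formulation like this, the analytical engine supplies a physically plausible baseline while the learned module only has to account for effects the engine approximates poorly (e.g., contact and friction), which is one way a hybrid can be more data-efficient than a purely learned model.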
