DART: Noise Injection for Robust Imitation Learning

One approach to Imitation Learning is Behavior Cloning, in which a robot observes a supervisor and infers a control policy. A known problem with this "off-policy" approach is that the robot's errors compound when it drifts away from the supervisor's demonstrations. On-policy techniques alleviate this by iteratively collecting corrective actions for the current robot policy. However, these techniques can be tedious for human supervisors, add significant computational burden, and may visit dangerous states during training. We propose an off-policy approach that injects noise into the supervisor's policy while demonstrating, which forces the supervisor to demonstrate how to recover from errors. We introduce a new algorithm, DART (Disturbances for Augmenting Robot Trajectories), that collects demonstrations with injected noise and optimizes the noise level to approximate the error of the robot's trained policy during data collection. We compare DART with DAgger and Behavior Cloning in two domains: in simulation with an algorithmic supervisor on MuJoCo tasks (Walker, Humanoid, Hopper, Half-Cheetah) and in physical experiments with human supervisors training a Toyota HSR robot to perform grasping in clutter. For high-dimensional tasks like Humanoid, DART can be up to $3\times$ faster in computation time and decreases the supervisor's cumulative reward by only $5\%$ during training, whereas DAgger executes policies that achieve $80\%$ less cumulative reward than the supervisor. On the grasping-in-clutter task, DART obtains on average a $62\%$ performance increase over Behavior Cloning.
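As a rough illustration of the data-collection loop described above, the Python sketch below injects zero-mean Gaussian noise into a supervisor's actions while labeling each state with the intended (noise-free) action, and then sets the noise covariance to the empirical covariance of the trained robot policy's error on those demonstrations. This is a minimal sketch, not the paper's reference implementation: the Gym-style `env.step` interface, the `supervisor` and `robot_policy` callables, and the `scale` hyperparameter are all illustrative assumptions.

```python
import numpy as np

def collect_noisy_demos(supervisor, env, cov, n_demos, horizon):
    """Roll out the supervisor with zero-mean Gaussian noise injected into
    its actions. Each visited state is labeled with the supervisor's
    *intended* (noise-free) action, so the dataset demonstrates how to
    recover from the perturbed states."""
    states, labels = [], []
    for _ in range(n_demos):
        s = env.reset()
        for _ in range(horizon):
            u = np.asarray(supervisor(s))          # intended action
            noise = np.random.multivariate_normal(np.zeros(u.shape[0]), cov)
            states.append(s)
            labels.append(u)
            s, _, done, _ = env.step(u + noise)    # execute the noisy action
            if done:
                break
    return np.array(states), np.array(labels)

def fit_noise_covariance(robot_policy, states, labels, scale=1.0):
    """Maximum-likelihood Gaussian noise level: the empirical covariance of
    the trained robot policy's error on the supervisor's labels, optionally
    scaled to anticipate the error of the final trained policy."""
    errors = np.stack([robot_policy(s) - u for s, u in zip(states, labels)])
    return scale * (errors.T @ errors) / len(errors)
```

In an iterative deployment, one would alternate these steps: collect demonstrations under the current noise covariance, train the robot policy on the aggregated data, then refit the covariance before the next round of collection.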
