HG-DAgger: Interactive Imitation Learning with Human Experts

Imitation learning has proven to be useful for many real-world problems, but approaches such as behavioral cloning suffer from data mismatch and compounding error issues. One attempt to address these limitations is the DAgger algorithm, which uses the state distribution induced by the novice to sample corrective actions from the expert. Such sampling schemes, however, require the expert to provide action labels without being fully in control of the system. This can decrease safety and, when using humans as experts, is likely to degrade the quality of the collected labels due to perceived actuator lag. In this work, we propose HG-DAgger, a variant of DAgger that is more suitable for interactive imitation learning from human experts in real-world systems. In addition to training a novice policy, HG-DAgger also learns a safety threshold for a model-uncertainty-based risk metric that can be used to predict the performance of the fully trained novice in different regions of the state space. We evaluate our method on both a simulated and a real-world autonomous driving task, and demonstrate improved performance over both DAgger and behavioral cloning.
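
To make the gating idea concrete, below is a minimal sketch of an HG-DAgger-style training loop: the human expert takes full control whenever they judge the novice to be drifting toward unsafe states, only actions taken while the expert is in control are added to the training set, and the novice's model uncertainty at those takeover states is collected to fit a safety threshold. The `env`, `novice`, and `expert` interfaces (`wants_control`, `uncertainty`, the threshold rule, and all method names) are hypothetical placeholders for illustration, not the paper's implementation.

```python
import numpy as np

def hg_dagger(env, novice, expert, n_epochs, rollouts_per_epoch):
    """Sketch of an expert-gated (HG-DAgger-style) training loop.

    Assumed, hypothetical interfaces (not from the paper):
      env.reset() -> state; env.step(action) -> (state, done)
      novice.act(state), novice.fit(dataset), novice.uncertainty(state)
      expert.wants_control(state), expert.act(state)
    """
    dataset = []       # (state, expert_action) pairs gathered under expert control
    risk_samples = []  # novice model uncertainty at expert-takeover states

    for _ in range(n_epochs):
        for _ in range(rollouts_per_epoch):
            state, done = env.reset(), False
            while not done:
                if expert.wants_control(state):
                    # Human gate engaged: the expert drives, so the recorded
                    # labels come from trajectories the expert fully controls.
                    action = expert.act(state)
                    dataset.append((state, action))
                    risk_samples.append(novice.uncertainty(state))
                else:
                    # Novice drives; no labels are collected off its actions.
                    action = novice.act(state)
                state, done = env.step(action)
        novice.fit(dataset)

    # One simple choice of safety threshold: the mean uncertainty observed at
    # states where the expert felt compelled to intervene. States where the
    # trained novice exceeds this level can be flagged as higher risk.
    tau = float(np.mean(risk_samples)) if risk_samples else None
    return novice, tau
```

The design point this sketch tries to capture is that, unlike vanilla DAgger, labels are never sampled while the novice is driving, which avoids the perceived-actuator-lag problem the abstract describes, and the intervention points double as calibration data for the risk metric.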
