Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments With Multimodal Sensor Fusion

All-day and all-weather navigation is a critical capability for autonomous driving, requiring appropriate reactions to varied environmental conditions and complex agent behaviors. With the rise of deep learning, end-to-end control for autonomous vehicles has been widely studied. However, most existing works rely solely on visual information, which can be degraded under challenging illumination conditions such as dim light or total darkness. In addition, they usually generate and apply deterministic control commands without accounting for future uncertainties. In this letter, we propose an imitation-learning-based probabilistic driving model with multi-perception capability that fuses information from the camera, lidar, and radar. We further evaluate its driving performance online on our new driving benchmark, which covers various environmental conditions (e.g., urban and rural areas, traffic densities, weather, and times of day) and dynamic obstacles (e.g., vehicles, pedestrians, motorcyclists, and bicyclists). The results show that the proposed model outperforms the baselines and generalizes well to unseen environments with heavy traffic and extreme weather.