End-To-End Interpretable Neural Motion Planner

In this paper, we propose a neural motion planner for learning to drive autonomously in complex urban scenarios that include traffic-light handling, yielding, and interactions with multiple road users. Towards this goal, we design a holistic model that takes raw LIDAR data and an HD map as input and produces interpretable intermediate representations in the form of 3D detections and their future trajectories, as well as a cost volume defining the goodness of each position that the self-driving car can take within the planning horizon. We then sample a set of diverse, physically possible trajectories and choose the one with the minimum learned cost. Importantly, our cost volume naturally captures multi-modality. We demonstrate the effectiveness of our approach on real-world driving data captured in several cities in North America. Our experiments show that the learned cost volume yields safer plans than all of the baselines.
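To make the sample-and-score step concrete, below is a minimal NumPy sketch of how a discretized spatio-temporal cost volume might score candidate trajectories and return the cheapest one. All names here (sample_trajectories, plan, the unicycle-style sampler, the grid resolution) are illustrative assumptions, not the authors' implementation; the paper's sampler produces physically feasible trajectories and its cost volume is predicted by a learned backbone rather than filled randomly.

import numpy as np

def sample_trajectories(num_trajectories, horizon, rng):
    """Sample simple constant-curvature trajectories in grid coordinates.

    Hypothetical stand-in for the paper's physically-feasible sampler;
    here we just integrate a unicycle model with random speed/curvature.
    Returns an array of shape (num_trajectories, horizon, 2) holding
    (row, col) offsets from the ego vehicle's current cell.
    """
    trajectories = np.zeros((num_trajectories, horizon, 2))
    for i in range(num_trajectories):
        heading = 0.0
        row, col = 0.0, 0.0
        curvature = rng.uniform(-0.05, 0.05)  # radians per timestep
        speed = rng.uniform(0.5, 2.0)         # grid cells per timestep
        for t in range(horizon):
            heading += curvature
            row += speed * np.sin(heading)
            col += speed * np.cos(heading)
            trajectories[i, t] = (row, col)
    return trajectories

def plan(cost_volume, trajectories, origin):
    """Pick the sampled trajectory with the minimum summed learned cost.

    cost_volume:  (T, H, W) array, one cost map per future timestep.
    trajectories: (N, T, 2) waypoints as (row, col) offsets from origin.
    origin:       (2,) grid position of the ego vehicle at t = 0.
    """
    T, H, W = cost_volume.shape
    idx = np.round(trajectories + origin).astype(int)
    rows = np.clip(idx[..., 0], 0, H - 1)   # clamp waypoints to the grid
    cols = np.clip(idx[..., 1], 0, W - 1)
    # Index timestep t's cost map at timestep t's waypoint, sum over time.
    costs = cost_volume[np.arange(T), rows, cols].sum(axis=1)
    best = int(np.argmin(costs))
    return trajectories[best], costs[best]

# Toy usage: a random cost volume stands in for the network's output.
rng = np.random.default_rng(0)
cost_volume = rng.random((10, 200, 200))
trajs = sample_trajectories(100, horizon=10, rng=rng)
best_traj, best_cost = plan(cost_volume, trajs, origin=np.array([100.0, 100.0]))

Because the cost volume assigns a cost to every reachable cell at every timestep rather than to a single predicted path, several low-cost corridors can coexist, which is how this formulation captures multi-modal futures.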
