CARLA Real Traffic Scenarios - novel training ground and benchmark for autonomous driving

This work introduces interactive traffic scenarios in the CARLA simulator that are based on real-world traffic. We concentrate on tactical tasks lasting several seconds, which are especially challenging for current control methods. CARLA Real Traffic Scenarios (CRTS) is intended as a training and testing ground for autonomous driving systems. To this end, we open-source the code under a permissive license and present a set of baseline policies. CRTS combines the realism of traffic scenarios with the flexibility of simulation. We use it to train agents with a reinforcement learning algorithm, show how to obtain competitive policies, and evaluate experimentally how observation types and reward schemes affect the training process and the resulting agent's behavior.
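The scenarios described above are short, episodic tactical tasks, which fit the standard reinforcement-learning interaction loop. The sketch below illustrates that loop with a toy stand-in environment; the class name `CrtsLaneChangeEnv`, the episode length, the observation layout, and the reward shaping are all illustrative assumptions, not the actual CRTS API.

```python
# Hedged sketch of an episodic RL loop for a CRTS-style tactical scenario.
# All names and numbers here are assumptions for illustration only.
import random


class CrtsLaneChangeEnv:
    """Toy stand-in for a CRTS-style scenario: an episode lasting a few
    seconds in which the agent must complete a tactical maneuver."""

    HORIZON = 50  # assumed episode length, e.g. ~5 s at 10 Hz

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return self._observation()

    def step(self, action):
        # action: assumed (steer, throttle) pair, each in [-1, 1]
        self.t += 1
        done = self.t >= self.HORIZON
        # Dense shaping: small per-step penalty plus a terminal success bonus.
        reward = -0.01 + (1.0 if done else 0.0)
        return self._observation(), reward, done, {}

    def _observation(self):
        # Stand-in for e.g. a bird's-eye-view raster or a vehicle-state vector.
        return [self.rng.random() for _ in range(4)]


def rollout(env, policy):
    """Run one episode with the given policy and return the total reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total


env = CrtsLaneChangeEnv(seed=42)
episode_return = rollout(env, policy=lambda obs: (0.0, 0.5))
print(round(episode_return, 2))  # 0.5: 50 steps * -0.01 + 1.0 bonus
```

Swapping the constant policy for a learned one (e.g. a PPO agent) and the stand-in observation for a bird's-eye-view raster or state vector recovers the kind of experiment the abstract describes, where observation type and reward scheme are the variables under study.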
