A learning-based framework for handling dilemmas in urban automated driving

Over the last decade, automated vehicles have been widely researched, and their great potential has been verified through several milestone demonstrations. However, many challenges remain. One of the biggest is integrating them into urban environments, where dilemmas occur frequently. Conventional automated driving strategies leave automated vehicles unable to cope with dilemmas such as changing lanes in heavy traffic, handling a yellow traffic light, and crossing a double-yellow line to pass an illegally parked car. In this paper, we introduce a novel automated driving strategy that allows automated vehicles to tackle these dilemmas. The key insight behind our strategy is that expert drivers understand human interactions on the road and comply with mutually accepted rules, which they have learned from countless experiences. To teach this expert driving strategy to automated vehicles, we propose a general learning framework based on maximum entropy inverse reinforcement learning and Gaussian processes. Experiments conducted on a 5.2 km-long campus road at Seoul National University demonstrate that our framework performs comparably to expert drivers in planning trajectories that handle various dilemmas.
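As a point of reference, and not taken from the paper itself, the following is a minimal sketch of the standard maximum entropy IRL formulation (Ziebart et al., 2008) that the abstract names; the trajectory features f_\zeta, weights \theta, and partition function Z(\theta) are the usual textbook quantities, not the paper's notation. The reward of a trajectory \zeta is assumed linear in its features, and demonstrated trajectories are modeled as exponentially more likely the higher their reward:

P(\zeta \mid \theta) = \frac{1}{Z(\theta)} \exp\!\big(\theta^{\top} f_{\zeta}\big), \qquad Z(\theta) = \sum_{\zeta'} \exp\!\big(\theta^{\top} f_{\zeta'}\big).

The weights \theta are fit by maximizing the log-likelihood of the expert demonstrations, whose gradient is the gap between the empirical expert feature expectations and the expected features under the current model:

\nabla_{\theta} \mathcal{L}(\theta) = \tilde{f}_{\mathrm{expert}} - \mathbb{E}_{P(\zeta \mid \theta)}\!\left[ f_{\zeta} \right].

In a framework such as the one described here, a Gaussian process can presumably stand in for the linear reward \theta^{\top} f_{\zeta} to capture nonlinear reward structure; that extension is not reproduced in this sketch.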
