Measuring driving performance for an All-Terrain Vehicle on a paved road in the woods

This paper presents a technique for rating human driving skill that combines deep-learning-based computer vision with Inverse Reinforcement Learning (IRL). The central idea is to transfer knowledge from human to machine and back to human: machine learning first teaches an agent the best driving behavior from an expert human driver, and that agent is then used to teach humans in turn. To achieve this, a deep-learning semantic segmentation network (ENet) was used to detect the road, and IRL was used to recover the reward function from an expert driver's behavior while driving the All-Terrain Vehicle (ATV) in the woods.
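The IRL step described above recovers a reward function by comparing the expert's behavior to a learner's. A minimal sketch of the feature-matching idea behind apprenticeship learning via IRL (Abbeel & Ng, 2004) is shown below; the states, features, and trajectories are hypothetical toy data for illustration, not from the paper.

```python
# Sketch of feature-expectation matching, the core of apprenticeship
# learning via IRL. Toy data only: "road"/"grass" states and an
# [on_road, off_road] indicator feature are assumptions for this example.

def feature_expectations(trajectories, features, gamma=0.9):
    """Discounted feature counts averaged over a set of trajectories."""
    mu = [0.0] * len(next(iter(features.values())))
    for traj in trajectories:
        for t, state in enumerate(traj):
            for i, f in enumerate(features[state]):
                mu[i] += (gamma ** t) * f
    n = len(trajectories)
    return [m / n for m in mu]

# Toy per-state features: [on_road, off_road] indicators.
features = {"road": [1.0, 0.0], "grass": [0.0, 1.0]}

# The expert stays on the road; a naive learner wanders off it.
expert = [["road", "road", "road"], ["road", "road", "road"]]
learner = [["road", "grass", "grass"], ["grass", "grass", "road"]]

mu_e = feature_expectations(expert, features)
mu_l = feature_expectations(learner, features)

# One projection-style update: reward weights point from the learner's
# behavior toward the expert's, so "on_road" gets a positive weight
# and "off_road" a negative one.
w = [e - l for e, l in zip(mu_e, mu_l)]
print(w)  # first component positive, second negative
```

Repeating this update while re-solving for the learner's policy under the current reward is what drives the learner's feature expectations toward the expert's; the paper applies the same principle with road features extracted by the segmentation network.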

[1] Andrew Y. Ng et al. Algorithms for Inverse Reinforcement Learning, 2000, ICML.

[2] Mark R. Savino. Standardized names and definitions for driving performance measures, 2009.

[3] A. Fischer. Inverse Reinforcement Learning, 2012.

[4] Pieter Abbeel et al. Apprenticeship learning via inverse reinforcement learning, 2004, ICML.

[5] Anind K. Dey et al. Maximum Entropy Inverse Reinforcement Learning, 2008, AAAI.

[6] Eduardo F. Morales et al. An Introduction to Reinforcement Learning, 2011.

[7] Masamichi Shimosaka et al. Modeling risk anticipation and defensive driving on residential roads with inverse reinforcement learning, 2014, 17th International IEEE Conference on Intelligent Transportation Systems (ITSC).

[8] Karel Brookhuis et al. Measuring driving performance by car-following in traffic, 1994.

[9] Hirokatsu Kataoka et al. Predicting driving behavior using inverse reinforcement learning with multiple reward functions towards environmental diversity, 2015, 2015 IEEE Intelligent Vehicles Symposium (IV).

[10] James M. Conrad et al. Components of an autonomous all-terrain vehicle, 2010, Proceedings of the IEEE SoutheastCon 2010 (SoutheastCon).

[11] Patrick M. Pilarski et al. Reactive Reinforcement Learning in Asynchronous Environments, 2018, Front. Robot. AI.

[12] J. M. Conrad et al. Autonomous all-terrain vehicle steering, 2012, 2012 Proceedings of IEEE SoutheastCon.

[13] Yoshihiko Suhara et al. Driver behavior profiling: An investigation with different smartphone sensors and machine learning, 2017, PLoS ONE.

[14] Dimitar Filev et al. Advanced planning for autonomous vehicles using reinforcement learning and deep inverse reinforcement learning, 2019, Robotics Auton. Syst.

[15] Romain Laroche et al. Hybrid Reward Architecture for Reinforcement Learning, 2017, NIPS.

[16] Sergey Levine et al. Nonlinear Inverse Reinforcement Learning with Gaussian Processes, 2011, NIPS.

[17] James M. Conrad et al. Using a CAN bus for control of an All-terrain Vehicle, 2014, IEEE SoutheastCon 2014.

[18] Sebastian Ramos et al. The Cityscapes Dataset for Semantic Urban Scene Understanding, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[19] Jan Peters et al. Relative Entropy Inverse Reinforcement Learning, 2011, AISTATS.

[20] Oliver Kroemer et al. Active Reward Learning, 2014, Robotics: Science and Systems.

[21] Pieter Abbeel et al. Inverse Reinforcement Learning, 2010, Encyclopedia of Machine Learning and Data Mining.

[22] Eyal Amir et al. Bayesian Inverse Reinforcement Learning, 2007, IJCAI.

[23] James M. Conrad et al. System Integration over a CAN Bus for a Self-Controlled, Low-Cost Autonomous All-terrain Vehicle, 2019, 2019 SoutheastCon.