Comparing datasets for generalizing models of driver intent in dynamic environments

In light of growing attention to intelligent vehicle systems, we present an assessment of methods for driver models that predict driver behavior. This work examines varying datasets to see their effects on intent detection algorithms. The motivation is to understand and assess how data from these datasets is mapped to discrete states, or modes, of intent. Using a model of a human driver's decision-making process to estimate intent, we build techniques for analyzing and learning human behaviors to improve understanding. We derive models based on human perception and interaction with the environment (e.g., other vehicles on the road) that are generalizable and flexible enough to detect intent across different drivers. The resulting detection scheme determines driver intent with high accuracy across multiple drivers, relying on a large dataset of lane changes under varying environmental constraints. By comparing different labeling methods, we assess the effectiveness of learned models under different class variations. This allows us to derive accurate and general models for detecting intent that rely on the subtle variations in behavior that humans exhibit while driving.
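
To illustrate the kind of comparison the abstract describes, the following is a minimal sketch (not the authors' code): two hypothetical labeling schemes map continuous driving signals to discrete intent classes, and a classifier is evaluated with leave-one-driver-out splits to gauge generalization across drivers. The feature names, thresholds, and synthetic data are illustrative assumptions.

```python
# Minimal sketch: comparing hypothetical labeling schemes for intent detection,
# evaluated across drivers with leave-one-driver-out cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-frame driver/vehicle features (assumed, not from the paper):
# [steering angle, lateral offset, gaze-off-road duration, gap to lead vehicle]
n_samples = 2000
X = rng.normal(size=(n_samples, 4))
drivers = rng.integers(0, 10, size=n_samples)  # 10 hypothetical drivers

def label_by_maneuver_window(x):
    """Binary label: 'lane-change intent' if lateral offset exceeds a threshold."""
    return (x[:, 1] > 0.5).astype(int)

def label_by_attention(x):
    """Three-class label combining lateral offset and gaze-off-road duration."""
    return (x[:, 1] > 0.5).astype(int) + (x[:, 2] > 1.0).astype(int)

logo = LeaveOneGroupOut()  # hold out one driver per fold to test cross-driver generalization
clf = RandomForestClassifier(n_estimators=200, random_state=0)

for name, labeler in [("maneuver window", label_by_maneuver_window),
                      ("attention-based", label_by_attention)]:
    y = labeler(X)
    scores = cross_val_score(clf, X, y, cv=logo, groups=drivers)
    print(f"{name}: mean cross-driver accuracy = {scores.mean():.2f}")
```

Holding out entire drivers, rather than random frames, is what makes the evaluation a test of generalization to unseen drivers rather than memorization of individual driving styles.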
