To Explain or Not to Explain: A Study on the Necessity of Explanations for Autonomous Vehicles

Explainable AI, in the context of autonomous systems such as self-driving cars, has drawn broad interest from researchers. Recent studies have found that providing explanations for an autonomous vehicle's actions has many benefits (e.g., increased trust and acceptance), but they place little emphasis on when an explanation is needed and how the content of an explanation changes with context. In this work, we investigate in which scenarios people need explanations and how the degree of necessity shifts with the situation and driver type. Through a user experiment, we ask participants to evaluate how necessary an explanation is and measure its impact on their trust in self-driving cars across different contexts. We also present a self-driving explanation dataset with first-person explanations and associated necessity measures for 1103 video clips, augmenting the Berkeley DeepDrive Attention dataset. Additionally, we propose a learning-based model that predicts, in real time, how necessary an explanation is for a given situation, using camera data as input. Our research reveals that driver type and context dictate whether an explanation is necessary and what content is helpful for improved interaction and understanding.
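The abstract's real-time necessity predictor maps camera frames to a scalar "how necessary is an explanation" score. As a minimal illustrative sketch (not the paper's model), the idea can be shown with hand-picked per-frame features — brightness and frame-differencing motion as stand-ins for learned video features — fed through a logistic scorer; all function names and weights below are hypothetical:

```python
import numpy as np

def frame_features(prev_frame, frame):
    """Toy per-frame features: mean brightness and motion magnitude
    via frame differencing. Stand-ins for learned CNN/video features."""
    brightness = frame.mean()
    motion = np.abs(frame.astype(float) - prev_frame.astype(float)).mean()
    return np.array([brightness, motion, 1.0])  # trailing 1.0 = bias term

def necessity_score(weights, prev_frame, frame):
    """Logistic score in [0, 1]: higher means an explanation is
    judged more necessary for the current driving situation."""
    z = weights @ frame_features(prev_frame, frame)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights; in the paper's setting these would be learned
# from the human-annotated necessity labels in the dataset.
w = np.array([0.01, 0.5, -3.0])

rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(4, 64, 64))  # 4 grayscale frames
scores = [necessity_score(w, clip[i - 1], clip[i]) for i in range(1, 4)]
```

Running the scorer frame-by-frame over a clip yields a necessity signal over time, which could then be thresholded to decide when to surface an explanation to the driver.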
