Towards Accountability: Providing Intelligible Explanations in Autonomous Driving
Marina Jirotka | Daniel Omeiza | Helena Webb | Lars Kunze