Explainable Robotic Systems

The increasing complexity of robotic systems makes it ever more important that they be transparent and trustworthy. When people interact with a robotic system, they inevitably construct mental models to understand and predict its actions. However, people's mental models of robotic systems stem from their interactions with living beings, which risks establishing incorrect or inadequate mental models and may lead people to either under- or over-trust these systems. We need to understand the inferences that people make about robots from their behavior, and leverage this understanding to formulate and implement behaviors in robotic systems that support the formation of correct mental models and foster trust calibration. This way, people will be better able to predict the intentions of these systems, and thus more accurately estimate their capabilities, better understand their actions, and potentially correct their errors. The aim of this full-day workshop is to provide a forum for researchers and practitioners to share and learn about recent research on people's inferences about robot actions, as well as the implementation of transparent, predictable, and explainable behaviors in robotic systems.