Many current machine learning techniques are inscrutable, and they can be hard for users to trust because they lack effective means of generating explanations for their outputs. There is considerable research and development in this area, with a wide variety of explanation techniques proposed for AI/ML across a range of data modalities. In this paper we investigate which modality of explanation to choose for a particular user and task, taking into account relevant contextual information such as the time available to the user, their level of skill, the level of access they have to the data and sensors in question, and the device they are using. Additional environmental factors, such as available bandwidth and the sensors and services currently usable, can also be accounted for. The explanation techniques we investigate range across transparent and post-hoc mechanisms and form part of a conversation with the user, in which the explanation (and therefore human understanding of the AI decision) is ascertained through dialogue with the system. Our research explores generic techniques that can underpin useful explanations in a range of modalities, in the context of AI/ML services that operate on multisensor data in a distributed, dynamic, contested and adversarial setting. We define a meta-model for representing this information and, through a series of examples, show how this approach can support conversational explanation across a range of situations, datasets and modalities.
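To illustrate the kind of contextual meta-model described above, the following Python sketch shows how user context and environmental factors might be represented and used to select an explanation modality. All class names, fields and selection rules here are hypothetical illustrations, not the paper's actual meta-model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UserContext:
    """Contextual information about the user requesting an explanation."""
    time_available_secs: float   # how long the user can spend on the explanation
    skill_level: str             # e.g. "novice", "analyst", "developer"
    data_access: bool            # whether the user may see the underlying sensor data
    device: str                  # e.g. "desktop", "mobile", "voice-only"

@dataclass
class Environment:
    """Environmental factors that constrain the explanation that can be delivered."""
    bandwidth_kbps: float        # currently available bandwidth
    usable_sensors: List[str]    # sensors whose data can be shown or referenced
    usable_services: List[str]   # explanation services currently reachable

def choose_modality(user: UserContext, env: Environment) -> str:
    """Pick an explanation modality given user and environment context.

    The rules below are purely illustrative; a real system would derive them
    from the meta-model rather than hard-code them.
    """
    if user.device == "voice-only" or env.bandwidth_kbps < 64:
        return "spoken-summary"            # no screen or low bandwidth: short natural-language answer
    if not user.data_access:
        return "textual-rationale"         # no data access: explanation without raw sensor content
    if user.skill_level == "developer" and user.time_available_secs > 120:
        return "saliency-visualisation"    # skilled user with time: detailed post-hoc visual explanation
    return "annotated-image"               # default: image from a usable sensor with highlighted regions

# Example: a field analyst on a mobile device over a constrained link
analyst = UserContext(time_available_secs=30, skill_level="analyst",
                      data_access=True, device="mobile")
field_env = Environment(bandwidth_kbps=48, usable_sensors=["cctv_03"],
                        usable_services=["lime", "caption-generator"])
print(choose_modality(analyst, field_env))  # -> "spoken-summary"
```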