As artificial intelligence (AI) systems become increasingly complex and ubiquitous, they will be responsible for decisions that directly affect individuals and society as a whole. Such decisions will need to be justified, both for ethical reasons and to maintain trust, yet justification has become difficult given the `black-box' nature of many modern AI models. Explainable AI (XAI) can potentially address this problem by explaining a system's actions, decisions, and behaviours to its users. However, much XAI research is conducted in a vacuum, relying only on researchers' intuitions of what constitutes a `good' explanation while ignoring interaction and the human perspective. This workshop invites researchers from the HCI community and related fields to engage in a discourse on human-centred approaches to XAI rooted in interaction, and to shed light on and spark discussion about the interaction design challenges in XAI.