Effects of Influence on User Trust in Predictive Decision Making

This paper introduces fact-checking into Machine Learning (ML) explanation by presenting training data points to users as facts in order to boost user trust. We aim to investigate which training data points influence a prediction, and how they affect user trust, so as to enhance ML explanation. We tackle this question by allowing users to check the training data points that have the highest and the lowest influence on the prediction. A user study found that presenting influences significantly increases user trust in predictions, but only for training data points with high influence values under the high model performance condition, where users can justify their actions with more similar facts.
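As a minimal sketch of the idea of ranking training points by their influence on a single prediction, the example below uses leave-one-out retraining: the influence of a training point is approximated by how much removing it shifts the model's predicted probability for the test instance. This is one common approximation; the paper's exact influence computation is not specified here, and the model and dataset below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Illustrative synthetic dataset; hold out the last point as the test instance.
X, y = make_classification(n_samples=40, n_features=5, random_state=0)
X_train, y_train, x_test = X[:-1], y[:-1], X[-1:]

base = LogisticRegression(max_iter=1000).fit(X_train, y_train)
base_prob = base.predict_proba(x_test)[0, 1]

influences = []
for i in range(len(X_train)):
    # Retrain without training point i.
    mask = np.arange(len(X_train)) != i
    model_i = LogisticRegression(max_iter=1000).fit(X_train[mask], y_train[mask])
    # Influence of point i: shift in the predicted probability when it is removed.
    influences.append(base_prob - model_i.predict_proba(x_test)[0, 1])

order = np.argsort(influences)
print("lowest-influence training points:", order[:3])
print("highest-influence training points:", order[-3:])
```

The points at the top of this ranking are the "facts" a user would inspect to decide whether to trust the prediction; points at the bottom serve as a low-influence contrast.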