We propose an Explainable AI model that can be employed to explain why a customer buys or abandons a non-life insurance coverage. The method consists of applying similarity clustering to the Shapley values obtained from a highly accurate XGBoost predictive classification algorithm. Our proposed method can be embedded into a technologically-based insurance service (Insurtech), allowing insurers to understand, in real time, the factors that most contribute to customers' decisions, thereby gaining proactive insights into their needs. We demonstrate the validity of our model with an empirical analysis conducted on data regarding purchases of insurance micro-policies. Two aspects are investigated: the propensity to buy an insurance policy and the risk of churn of an existing customer. The results of the analysis reveal that customers can be effectively and quickly grouped according to a similar set of characteristics, which predicts their buying or churn behaviour well.
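A minimal sketch of the described pipeline, assuming the Python libraries xgboost, shap, and scikit-learn. The feature names, placeholder data, and the choice of k-means as the similarity-clustering step are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import xgboost as xgb
import shap
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Placeholder customer data: X holds feature vectors (e.g. age, premium,
# channel); y encodes the outcome, 1 = buy / stay, 0 = abandon / churn.
X = np.random.rand(500, 4)
y = np.random.randint(0, 2, 500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Fit the predictive classifier.
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# 2. Compute per-customer Shapley values with SHAP's TreeExplainer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # shape: (n_customers, n_features)

# 3. Cluster customers by the similarity of their Shapley-value profiles,
#    grouping those whose decisions are driven by the same factors
#    (k-means with 4 clusters is an assumption for illustration).
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(shap_values)

# Each cluster collects customers with similar explanation patterns; the
# mean absolute SHAP value per feature within a cluster indicates which
# factor most drives that group's buy / churn behaviour.
for c in np.unique(clusters):
    mean_abs = np.abs(shap_values[clusters == c]).mean(axis=0)
    print(f"cluster {c}: dominant feature index = {mean_abs.argmax()}")
```

Clustering the explanation space rather than the raw feature space groups customers by why the model scores them as it does, which is what makes the segments interpretable for real-time Insurtech use.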