Content Determination for Natural Language Descriptions of Predictive Bayesian Networks

The dramatic success of Artificial Intelligence and its applications has been accompanied by increasing complexity, which makes AI systems harder for end users to comprehend and undermines their trustworthiness. Within this context, Explainable AI has emerged with the aim of making the decisions of intelligent systems more transparent and understandable to human users. In this paper, we propose a framework for explaining predictive inference in Bayesian Networks (BNs) in natural language to non-specialist users. The model represents the information embedded in the BN by means of (fuzzy) quantified statements and reasons with a fuzzy syllogism. The framework shows how this representation can be used for the content determination stage of Natural Language Generation explanation systems for BNs. Through a number of realistic usage scenarios, we show how the generated explanations allow the user to trace the inference steps of the approximate reasoning process in predictive BNs.