Explaining Results of Neural Networks by Contextual Importance and Utility

The use of neural networks remains difficult in many application areas due to the lack of explanation facilities (the "black box" problem). The concepts of contextual importance and contextual utility presented here make it possible to explain the results of neural networks in a user-understandable way. The explanations obtained are of the same quality as those of expert systems, but they may be more flexible since the reasoning module and the explanation module are completely separated. The numerical complexity of estimating contextual importance and contextual utility is to a great extent solved by the proposed neural network (INKA), which also has good function-approximation and training properties.
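As a rough illustration of the idea, the sketch below estimates contextual importance and contextual utility for a black-box model, assuming the commonly cited definitions CI = (Cmax − Cmin) / (absmax − absmin) and CU = (y − Cmin) / (Cmax − Cmin), where Cmin and Cmax are the extreme outputs obtained by varying one input over its range while the rest of the context is held fixed. The function name, sampling scheme, and toy model are illustrative assumptions, not the paper's INKA implementation.

```python
import numpy as np

def contextual_importance_utility(model, context, j, x_range,
                                  absmin, absmax, n_samples=1000):
    """Estimate CI and CU of input j for the given context,
    assuming the standard definitions (illustrative sketch)."""
    # Output of the model for the actual context.
    y = model(np.asarray(context, dtype=float))

    # Vary input j over its allowed range, holding the other inputs fixed.
    outputs = []
    for v in np.linspace(x_range[0], x_range[1], n_samples):
        x = np.asarray(context, dtype=float)
        x[j] = v
        outputs.append(model(x))
    cmin, cmax = min(outputs), max(outputs)

    # CI: how much of the output's total span this input controls here.
    ci = (cmax - cmin) / (absmax - absmin)
    # CU: how favorable the input's current value is within that span.
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.0
    return ci, cu

# Toy "black box" with two inputs (hypothetical example, not INKA).
model = lambda x: 0.8 * x[0] ** 2 + 0.2 * x[1]
ci, cu = contextual_importance_utility(model, context=[0.9, 0.3], j=0,
                                       x_range=(0.0, 1.0),
                                       absmin=0.0, absmax=1.0)
print(f"CI = {ci:.2f}, CU = {cu:.2f}")
```

Because only the model's input-output behavior is queried, this computation is independent of the reasoning module's internals, which is what allows the explanation module to be completely separated from it.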