Analysis of neural networks in terms of domain functions

Despite their success, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained network reaches its output; neural networks are not only (incorrectly) seen as a "magic tool" but, perhaps even more, as a mysterious "black box." Although much research has been done to "open the box," there is a notable gap in the published work on the analysis of neural networks. So far, mainly sensitivity analysis and rule extraction have been used for this purpose, but these methods can only be applied in a limited subset of the problem domains in which neural network solutions are encountered. In this paper we propose a more widely applicable method which, for a given problem domain, involves identifying base functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those base functions. This provides a comprehensible description of the network's function and, depending on the chosen base functions, may also give insight into the network's inner "reasoning." It can further be used to optimize neural network systems, and an analysis in terms of base functions may even show how to (re)construct a superior system from those base functions, thus using the neural network as a construction advisor.
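As a minimal sketch of the idea, not the paper's actual procedure: one could take a trained network (here a hypothetical one-unit toy stand-in) and a small, invented catalogue of base functions a domain user might already know, fit each base function to the network's response by least squares, and report the best-matching description. All names and the choice of base functions below are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a trained network: one tanh unit with fixed weights.
def trained_unit(x, w=2.0, b=-1.0):
    return np.tanh(w * x + b)

# Invented catalogue of candidate "base functions"; each entry maps inputs x
# to a design matrix whose columns are the function's free terms.
base_functions = {
    "linear":    lambda x: np.column_stack([x, np.ones_like(x)]),
    "quadratic": lambda x: np.column_stack([x**2, x, np.ones_like(x)]),
    "step-like": lambda x: np.column_stack([np.sign(x - 0.5), np.ones_like(x)]),
}

def describe(unit, xs):
    """Fit each base function to the unit's response by least squares
    and return the best-matching name plus the per-function MSE."""
    ys = unit(xs)
    errors = {}
    for name, design in base_functions.items():
        A = design(xs)
        coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
        errors[name] = float(np.mean((A @ coef - ys) ** 2))
    return min(errors, key=errors.get), errors

xs = np.linspace(-2.0, 2.0, 200)
best, errors = describe(trained_unit, xs)
print(best, errors)
```

The residual error per base function doubles as a rough measure of how faithful each description is, which is what would let such an analysis flag where a simple domain formula suffices and where the network does something the catalogue cannot express.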
