Interpretation of Neural Networks for Classification Tasks
Many different approaches have been proposed to overcome the black-box behaviour of neural networks, but to date there is no standard tool able to handle general feedforward networks. In this paper a method is proposed that combines visualization techniques with transformation algorithms to interpret feedforward neural networks. An application in ultrasonic crack detection shows that overcoming the black-box structure of neural networks is not merely of academic interest: it demonstrates that optimizing a network after training is both possible and highly useful.
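As a rough illustration of the kind of post-training inspection and optimization the abstract alludes to, the sketch below ranks the hidden units of a small feedforward classifier by a simple weight-magnitude relevance heuristic and prunes the least relevant one. The network weights, the relevance measure, and the pruning step are all illustrative assumptions, not the authors' visualization or transformation algorithms.

```python
# Illustrative sketch (not the paper's method): rank hidden units of a
# trained feedforward classifier by a crude weight-magnitude relevance
# score, then prune the least relevant unit and compare outputs.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" 8-input, 6-hidden, 2-class network (random stand-in weights).
W1 = rng.normal(size=(6, 8))   # input-to-hidden weights
W2 = rng.normal(size=(2, 6))   # hidden-to-output weights

def forward(x, W1, W2):
    """Plain feedforward pass: tanh hidden layer, softmax output."""
    h = np.tanh(W1 @ x)
    z = W2 @ h
    e = np.exp(z - z.max())
    return e / e.sum()

# Heuristic relevance of each hidden unit: total magnitude of its
# incoming and outgoing connections (a simple skeletonisation-style criterion).
relevance = np.abs(W1).sum(axis=1) + np.abs(W2).sum(axis=0)
order = np.argsort(relevance)
print("hidden units ranked least-to-most relevant:", order)

# "Optimize after training" by removing the least relevant hidden unit
# and checking how much the output changes on a sample input.
x = rng.normal(size=8)
keep = np.delete(np.arange(6), order[0])
print("full output:  ", forward(x, W1, W2))
print("pruned output:", forward(x, W1[keep], W2[:, keep]))
```

In practice such a relevance ranking would be computed on the actually trained weights and validated against held-out classification accuracy before any unit is removed.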