Neural networks are commonly regarded as black boxes performing incomprehensible functions. For classification problems a network provides a map from a high-dimensional feature space to a K-dimensional image space. Images of training vectors are projected onto polygon vertices, providing a visualization of the network function. Such visualization may show the dynamics of learning, allow comparison of different networks, display training vectors around which potential problems may arise, reveal differences due to regularization and optimization procedures, support investigation of the stability of network classification under perturbation of the original vectors, and place a new data sample in relation to the training data, allowing the confidence in the classification of a given sample to be estimated. Illustrative examples for the three-class Wine data and the five-class Satimage data are described. The visualization method proposed here is applicable to any black-box system that provides continuous outputs.
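As a rough sketch of this kind of mapping, and not the paper's exact procedure, the Python snippet below assumes the K network outputs are normalized and used as weights over the vertices of a regular K-gon, one vertex per class, so each sample is drawn at the activation-weighted average of the vertices. The normalization step, the toy softmax-like data, and the names `class_polygon_vertices` and `project_outputs` are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt


def class_polygon_vertices(k):
    """Vertices of a regular K-gon on the unit circle, one per class."""
    angles = 2 * np.pi * np.arange(k) / k + np.pi / 2  # start at the top
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (K, 2)


def project_outputs(outputs):
    """Map K-dimensional network outputs to 2-D points inside the class polygon.

    outputs: array of shape (N, K) with non-negative activations
    (e.g. softmax values). Each sample's position is the activation-weighted
    average of the class vertices (an assumed convention here), so a sample
    classified with full confidence lands exactly on its class vertex.
    """
    outputs = np.asarray(outputs, dtype=float)
    weights = outputs / outputs.sum(axis=1, keepdims=True)
    vertices = class_polygon_vertices(outputs.shape[1])
    return weights @ vertices  # shape (N, 2)


if __name__ == "__main__":
    # Toy stand-in for three-class network outputs (e.g. the Wine data):
    # random softmax-like vectors, coloured by the winning class.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(300, 3)) * 2
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    points = project_outputs(probs)
    labels = probs.argmax(axis=1)

    verts = class_polygon_vertices(3)
    plt.plot(*np.vstack([verts, verts[:1]]).T, "k-", lw=1)  # polygon outline
    plt.scatter(points[:, 0], points[:, 1], c=labels, s=10, cmap="viridis")
    plt.gca().set_aspect("equal")
    plt.title("Network outputs mapped into the class polygon")
    plt.show()
```

Under these assumptions, confidently classified samples sit near their class vertex while ambiguous samples drift toward the interior of the polygon, which is what makes such a plot useful for judging the confidence of a classification.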