Feature saliency measures

Abstract: This paper presents a survey of feature saliency measures used in artificial neural networks. Saliency measures can be used to assess a feature's relative importance. We contrast two basic philosophies for measuring feature saliency, or importance, within a feed-forward neural network. One philosophy is to evaluate each feature with respect to relative changes in either the neural network's output or the neural network's probability of error; we refer to this as a derivative-based philosophy of feature saliency. Using the derivative-based philosophy, we propose a new, more efficient probability-of-error measure. A second philosophy is to measure the relative size of the weight vector emanating from each feature; we refer to this as a weight-based philosophy of feature saliency. We derive several unifying relationships that exist among the derivative-based feature saliency measures, as well as between the derivative-based and weight-based feature saliency measures. We also report experimental results for a target recognition problem using a number of derivative-based and weight-based saliency measures.
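
The sketch below is a minimal illustration of the two philosophies, not the paper's exact measures: a derivative-based saliency computed as the mean squared sensitivity of the network output to each input feature over a sample, and a weight-based saliency computed as the norm of the first-layer weight vector leaving each input node. The one-hidden-layer network, its randomly initialized weights, and all variable names are illustrative assumptions; in practice the saliencies would be computed from a trained network and its training data.

```python
# Minimal sketch of derivative-based vs. weight-based feature saliency for a
# one-hidden-layer feed-forward network y = sigmoid(W2 @ tanh(W1 @ x + b1) + b2).
# The random weights below are placeholders for a trained network.
import numpy as np

rng = np.random.default_rng(0)

n_features, n_hidden = 5, 8
W1 = rng.normal(size=(n_hidden, n_features))   # input-to-hidden weights
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(1, n_hidden))            # hidden-to-output weights
b2 = rng.normal(size=1)

def output_gradient(x):
    """Derivative of the scalar network output with respect to each input feature."""
    h = np.tanh(W1 @ x + b1)
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
    # Chain rule: dy/dx = y(1 - y) * W2 @ diag(1 - h^2) @ W1
    return (y * (1.0 - y)) * (W2 * (1.0 - h ** 2)) @ W1

# Derivative-based saliency: average squared sensitivity of the output to each
# feature over a sample of inputs (one common variant of this philosophy).
X = rng.normal(size=(200, n_features))
derivative_saliency = np.mean(
    np.vstack([output_gradient(x) ** 2 for x in X]), axis=0
)

# Weight-based saliency: relative size of the weight vector emanating from each
# input node, here the Euclidean norm of the corresponding column of W1.
weight_saliency = np.linalg.norm(W1, axis=0)

print("derivative-based saliency:", derivative_saliency)
print("weight-based saliency:   ", weight_saliency)
```

The probability-of-error variant of the derivative-based philosophy would replace the output gradient above with the sensitivity of the network's estimated probability of error to each feature; the form shown here uses only the output sensitivity.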
