Statistical Sensitivity Measure of Single Layer Perceptron Neural Networks to Input Perturbation

In this work, we study the statistical output sensitivity measure of a trained single layer perceptron neural network to input perturbation. This quantitative measure computes the expectation of the absolute output deviation caused by input perturbation, taken over all possible inputs. This is an important first step toward studying the statistical output sensitivity measure of multilayer perceptron neural networks. The major contribution of this work is the relaxation of the uniform input distribution restriction imposed in our earlier studies. The new sensitivity measure is therefore applicable to real-world applications such as machine learning problems. Furthermore, experimental results show that the new sensitivity measure is suitable for networks with large input dimensions.
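The measure described above can be approximated numerically. The following is a minimal Monte Carlo sketch, assuming a hypothetical single layer perceptron with a tanh activation, Gaussian (non-uniform) inputs, and a fixed perturbation scale `sigma`; all weights and parameters here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single layer perceptron: y = tanh(w . x + b)
n_inputs = 5
w = rng.normal(size=n_inputs)
b = 0.1

def output(x):
    return np.tanh(x @ w + b)

# Monte Carlo estimate of the statistical sensitivity:
# E_x[ |f(x + dx) - f(x)| ], with x drawn from a non-uniform
# (here Gaussian) input distribution and dx a small random perturbation.
n_samples = 100_000
sigma = 0.01  # assumed perturbation magnitude
x = rng.normal(size=(n_samples, n_inputs))        # inputs need not be uniform
dx = sigma * rng.normal(size=(n_samples, n_inputs))
sensitivity = np.mean(np.abs(output(x + dx) - output(x)))
print(sensitivity)
```

Because the estimate is an expectation over the input distribution, it naturally accommodates arbitrary (e.g. Gaussian) inputs, which is the relaxation the abstract emphasizes.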
