Computation of two-layer perceptron networks’ sensitivity to input perturbation

The sensitivity of a neural network's output to perturbation of its input is an important measure for evaluating the network's performance. In this paper we propose a novel method to quantify the sensitivity of a two-layer perceptron network (TLPN). The sensitivity is defined as the mathematical expectation of the absolute output deviation caused by input perturbations, taken over all possible inputs. Our method follows a bottom-up approach: the sensitivity of a single neuron is derived first, and that result is then extended to the entire network. The main contribution of the method is that it requires only a weak assumption on the input, namely that its elements be independent and identically distributed (i.i.d.), which makes it more practical for real applications. Experiments have been conducted, and the results demonstrate the high accuracy and efficiency of the method.
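The sensitivity measure described above can be approximated empirically by Monte Carlo sampling. The sketch below is an illustration, not the paper's analytical derivation: it assumes a hypothetical TLPN with a tanh hidden layer and a linear output, draws i.i.d. Gaussian inputs and perturbations (both choices are assumptions for the example), and estimates the expectation of the absolute output deviation.

```python
import numpy as np

def tlpn(x, W1, b1, W2, b2):
    """Hypothetical two-layer perceptron: tanh hidden layer, linear output."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def sensitivity(W1, b1, W2, b2, sigma_x=1.0, sigma_dx=0.1,
                n_samples=100_000, seed=None):
    """Monte Carlo estimate of E[|f(x + dx) - f(x)|] over i.i.d. inputs.

    sigma_x  -- standard deviation of the i.i.d. input elements (assumed Gaussian)
    sigma_dx -- standard deviation of the input perturbation (assumed Gaussian)
    """
    rng = np.random.default_rng(seed)
    n_in = W1.shape[0]
    x = rng.normal(0.0, sigma_x, size=(n_samples, n_in))    # i.i.d. inputs
    dx = rng.normal(0.0, sigma_dx, size=(n_samples, n_in))  # perturbations
    deviation = np.abs(tlpn(x + dx, W1, b1, W2, b2) - tlpn(x, W1, b1, W2, b2))
    return deviation.mean()  # sample mean approximates the expectation
```

As a sanity check, the estimate grows with the perturbation magnitude `sigma_dx`, consistent with the intuition that larger input perturbations produce larger expected output deviations.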
