Sensitivity Study of Binary Feedforward Neural Networks

Abstract This paper presents a novel and effective approach for quantifying the output sensitivity of binary feedforward neural networks (BFNNs) to weight and input perturbations. First, analytical formulae for a single neuron's sensitivity are derived by means of matrix and probability theory. Then, based on the neurons' sensitivities and the network's architecture, a bottom-up strategy is followed to compute the sensitivity of the entire network layer by layer. Compared with existing methods, the proposed approach offers greater generality, lower computational complexity, and higher accuracy. Experimental results verify the correctness and effectiveness of the approach.
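The paper derives closed-form expressions rather than relying on simulation, but a Monte Carlo estimate of the same quantity is the natural reference point for the kind of experimental verification the abstract mentions. The sketch below is a minimal illustration, assuming sign-activation (threshold) neurons, bipolar inputs drawn uniformly from {-1, +1}, and bounded additive perturbations on inputs and weights; the function and parameter names (`forward`, `estimate_sensitivity`, `n_samples`, and so on) are hypothetical and not taken from the paper.

```python
import numpy as np

def forward(x, weights):
    """Propagate a bipolar input through a binary feedforward network.

    weights: list of (fan_in x fan_out) matrices; each neuron applies
    the sign (threshold) activation, so every layer output is in {-1, +1}.
    """
    a = x
    for W in weights:
        a = np.sign(a @ W)
        a[a == 0] = 1.0  # break ties deterministically
    return a

def estimate_sensitivity(weights, delta_w, delta_x, n_inputs,
                         n_samples=10000, seed=0):
    """Monte Carlo estimate of output sensitivity: the probability that
    the network's output changes under the given weight and input
    perturbations, with inputs sampled uniformly from {-1, +1}^n.
    """
    rng = np.random.default_rng(seed)
    flips = 0
    for _ in range(n_samples):
        x = rng.choice([-1.0, 1.0], size=n_inputs)            # random bipolar input
        x_pert = x + rng.uniform(-delta_x, delta_x, x.shape)  # perturbed input
        w_pert = [W + rng.uniform(-delta_w, delta_w, W.shape) for W in weights]
        if not np.array_equal(forward(x, weights), forward(x_pert, w_pert)):
            flips += 1
    return flips / n_samples

# Example: a 4-3-1 binary feedforward network with random weights.
rng = np.random.default_rng(1)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((3, 1))]
print(estimate_sensitivity(weights, delta_w=0.1, delta_x=0.1, n_inputs=4))
```

The contrast with the paper's bottom-up strategy is the point of the sketch: the sampling loop above scales with the number of samples needed for a stable estimate, whereas analytical per-neuron sensitivities, combined layer by layer along the architecture, replace the loop with closed-form probabilities and account for the lower computational complexity claimed in the abstract.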
