Robustness of feedforward neural networks
Designing dense, high-speed, feedforward neural networks requires an understanding of the consequences of using simple neurons with significant input and weight errors. To develop a general understanding of these consequences, independent of any particular choice of inputs and weights, an analysis is presented of a general class of Madalines, i.e., those with random inputs and weights. Using a stochastic model for input and weight errors, simple analytical expressions are derived for the output error variance of feedforward neural networks composed of sigmoidal, threshold, or linear units. These expressions show that the gain in error from input to output in any layer of such a Madaline is greater than one. Madalines are therefore sensitive to implementation errors and, in this sense, are not inherently robust.
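The error-gain claim above can be checked empirically with a small Monte Carlo experiment. The sketch below is not the paper's derivation; it is a hypothetical simulation that builds a random feedforward network (linear units, one of the three unit types the abstract covers, chosen here because the output error is easy to measure), injects small Gaussian input errors, and estimates the ratio of output error variance to input error variance. The layer width, depth, weight distribution, and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64           # units per layer (hypothetical)
layers = 3       # network depth (hypothetical)
sigma_in = 1e-3  # std of injected input error

# Random weights uniform on [-1, 1], modeling a "random Madaline"
Ws = [rng.uniform(-1.0, 1.0, size=(n, n)) for _ in range(layers)]

def forward(x):
    # Linear units: output error propagates as W @ e, so the
    # variance ratio can be estimated directly by simulation.
    for W in Ws:
        x = W @ x
    return x

trials = 2000
gains = []
for _ in range(trials):
    x = rng.uniform(-1.0, 1.0, n)      # nominal random input
    e = rng.normal(0.0, sigma_in, n)   # small input error
    dy = forward(x + e) - forward(x)   # induced output error
    gains.append(dy.var() / e.var())   # per-trial variance gain

print(np.mean(gains))  # average output/input error-variance ratio
```

For this linear case the per-layer variance gain is roughly n·E[w²] = 64/3 per layer, so the network-level ratio printed is far greater than one, consistent with the abstract's conclusion that errors grow from layer to layer.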