Robustness of feedforward neural networks

Designing dense, high-speed, feedforward neural networks requires an understanding of the consequences of using simple neurons with significant input and weight errors. To develop a generalized understanding of these consequences, independent of any particular choice of inputs and weights, an analysis is presented of a general class of Madalines (multilayer networks of Adaline elements), namely those with random inputs and weights. Using a stochastic model for input and weight errors, simple analytical expressions are derived for the output error variance of feedforward neural networks composed of sigmoidal, threshold, or linear units. These expressions show that the gain in error from the input to the output of any layer of a Madaline is greater than one. Madalines are therefore sensitive to implementation errors and, in this sense, are not inherently robust.
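As a quick plausibility check of the claimed error gain, rather than a reproduction of the paper's analytical derivation, the following sketch Monte Carlo-simulates the random-Madaline setting for threshold (sign) units: the layer width, depth, error levels, and weight/input distributions are illustrative assumptions, not values taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def madaline_outputs(x, weights):
    # Propagate x through threshold (sign) layers, recording each layer's output.
    outs = []
    for W in weights:
        x = np.sign(W @ x)
        outs.append(x)
    return outs

# Illustrative assumptions: 100 units per layer, 4 layers, small Gaussian
# input and weight errors. None of these values come from the abstract.
n_units, n_layers, n_trials = 100, 4, 500
sigma_w = 0.02   # assumed weight-error standard deviation
sigma_x = 0.05   # assumed input-error standard deviation

err_var = np.zeros(n_layers)
for _ in range(n_trials):
    # Random Madaline: Gaussian weights (scaled for unit pre-activation
    # variance) and random +/-1 inputs.
    weights = [rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)
               for _ in range(n_layers)]
    x = rng.choice([-1.0, 1.0], size=n_units)

    clean = madaline_outputs(x, weights)
    noisy = madaline_outputs(
        x + sigma_x * rng.standard_normal(n_units),
        [W + sigma_w * rng.standard_normal(W.shape) for W in weights],
    )
    # Per-layer output error variance, averaged over units and trials.
    err_var += [np.mean((c - n) ** 2) for c, n in zip(clean, noisy)]

err_var /= n_trials
print("per-layer output error variance:", np.round(err_var, 4))
print("layer-to-layer error gain:", np.round(err_var[1:] / err_var[:-1], 2))

Under these assumptions the measured error variance grows from layer to layer (each printed gain exceeds one, until the outputs begin to decorrelate and the variance saturates), which is consistent with the abstract's conclusion that Madalines amplify implementation errors.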