Feedforward neural networks with random weights

A number of experiments reported in the neural network literature appear to contradict classical pattern recognition and statistical estimation theory. The authors attempt to provide an experimental explanation of why this could be possible by showing that a large fraction of the parameters (the weights of a neural network) is of minor importance and need not be determined with high accuracy; as the title suggests, these weights may even be fixed at random. The remaining weights are capable of implementing the desired classifier, and because they constitute only a small fraction of the total number of weights, the reported experiments appear more realistic from a classical point of view.
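As a rough illustration of the idea (a minimal sketch, not the authors' experimental setup), the Python snippet below fixes the hidden-layer weights of a single-hidden-layer feedforward network at random and trains only the output weights, here by linear least squares. The toy two-class problem, the hidden-layer size, and the weight distributions are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: two Gaussian blobs in 2-D, labels -1 and +1.
n, d = 200, 2
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(+1.0, 1.0, (n // 2, d))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

# Hidden layer: weights drawn at random and never trained
# (size and distribution are assumptions, not the paper's values).
h = 50
W = rng.normal(0.0, 1.0, (d, h))
b = rng.normal(0.0, 1.0, h)
H = np.tanh(X @ W + b)  # random nonlinear features of the inputs

# Only the output weights are fitted: a small fraction of all
# parameters, solved here by ordinary least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Accuracy of the resulting classifier on the training set.
pred = np.sign(H @ beta)
print("training accuracy:", np.mean(pred == y))
```

In this sketch only the `h` output weights are estimated from data, while the `d * h + h` hidden-layer parameters remain at their random initial values, mirroring the paper's point that most weights need not be determined with high accuracy.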