Fault tolerance of pruned multilayer networks

Techniques for dynamically reducing the size of a neural network during learning have been found by several investigators to speed up convergence and improve generalization. However, this raises the concern that the pruned network may be more fault sensitive than its parent. This work assesses the tolerance of multilayer feedforward networks to the zeroing of individual weights, and determines whether pruning during learning affects that tolerance. Multilayer networks with a single input and a single output were trained to produce the sine of the input value on the interval (−π, π). Identical networks with identical initial weights were then trained using the skeletonization technique of Mozer and Smolensky (1989). Each weight in these networks was zeroed in turn, and the effect on the RMS approximation error was recorded. Surprisingly, the unpruned networks, which had considerably more free parameters, proved no more tolerant to weight zeroing than the pruned networks, and maintaining a separate relevance estimate for each node was found to be unnecessary.
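
The following is a minimal sketch of the experimental procedure, not the authors' code. It assumes a hypothetical 1-16-1 tanh network in PyTorch (the abstract does not state the architecture, training setup, or skeletonization details): it trains the sine approximator, estimates a skeletonization-style relevance for each hidden unit by gating the unit's output, and then zeroes each weight in turn, recording the RMS approximation error before restoring the weight.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Training data: x in (-pi, pi), target sin(x)
x = torch.linspace(-torch.pi, torch.pi, 200).unsqueeze(1)
y = torch.sin(x)

# Hypothetical 1-16-1 tanh network; the abstract does not give the sizes
lin1, lin2 = nn.Linear(1, 16), nn.Linear(16, 1)
params = list(lin1.parameters()) + list(lin2.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

def forward(inp):
    return lin2(torch.tanh(lin1(inp)))

# Train the sine approximator by minimizing mean squared error
for _ in range(2000):
    opt.zero_grad()
    loss = ((forward(x) - y) ** 2).mean()
    loss.backward()
    opt.step()

def rms_error():
    with torch.no_grad():
        return torch.sqrt(((forward(x) - y) ** 2).mean()).item()

# Skeletonization-style relevance (after Mozer and Smolensky, 1989):
# gate each hidden unit with alpha_j = 1 and estimate its relevance
# as rho_j ~ -dE/d(alpha_j); the least relevant units are pruning
# candidates.  Details here are an assumption, not the paper's recipe.
alpha = torch.ones(16, requires_grad=True)
gated_err = ((lin2(torch.tanh(lin1(x)) * alpha) - y) ** 2).mean()
gated_err.backward()
relevance = -alpha.grad

# Fault test: zero each weight in turn, record the RMS error, restore it
baseline = rms_error()
faulted = []
for p in params:
    flat = p.data.view(-1)
    for i in range(flat.numel()):
        saved = flat[i].item()
        flat[i] = 0.0               # simulate a stuck-at-zero fault
        faulted.append(rms_error())
        flat[i] = saved             # restore before the next fault

print(f"baseline RMS {baseline:.4f}, "
      f"mean faulted RMS {sum(faulted)/len(faulted):.4f}, "
      f"worst {max(faulted):.4f}")
print("unit relevances:", relevance.detach().numpy().round(3))
```

Running the same fault loop on a skeletonized network and on its unpruned parent, and comparing the mean and worst-case faulted RMS errors, reproduces the comparison the abstract describes.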