Direct Approaches to Improving the Robustness of Multilayer Neural Networks

Multilayer neural networks trained with backpropagation are, in general, not robust against the loss of a hidden neuron. In this paper we define a form of robustness called 1-node robustness and propose methods to improve it. The first approach modifies the error function by adding a ``robustness error'' term; it yields more robust networks, but at the cost of reduced accuracy. The second approach, ``pruning-and-duplication'', duplicates the neurons whose loss is most damaging to the network, reusing pruned neurons for the duplicates. This procedure yields robust and accurate networks at low computational cost, and may also prove beneficial for generalisation.
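Both procedures are easy to prototype. The NumPy sketch below is an illustration under stated assumptions, not the paper's implementation: the toy network, the averaged form of the ``robustness error'', the penalty weight `lam`, and the helper names (`node_errors`, `prune_and_duplicate`) are all ours. It measures 1-node robustness by ablating each hidden neuron in turn, and performs one pruning-and-duplication step by overwriting the least important neuron with a copy of the most critical one, halving the pair's outgoing weights so the original mapping is approximately preserved.

```python
import numpy as np

def forward(params, X, dead=None):
    """Tiny one-hidden-layer MLP; `dead` simulates the loss of one hidden neuron."""
    W1, b1, W2, b2 = params
    H = 1.0 / (1.0 + np.exp(-(X @ W1.T + b1)))  # sigmoid hidden layer
    if dead is not None:
        H = H.copy()
        H[:, dead] = 0.0                        # ablate hidden neuron `dead`
    return H @ W2.T + b2                        # linear output layer

def mse(params, X, Y, dead=None):
    return float(np.mean((forward(params, X, dead) - Y) ** 2))

def node_errors(params, X, Y):
    """Error of the network with each hidden neuron deleted in turn;
    the maximum over neurons quantifies (the lack of) 1-node robustness."""
    return np.array([mse(params, X, Y, dead=j) for j in range(params[0].shape[0])])

def robust_loss(params, X, Y, lam=0.1):
    """Assumed form of the augmented objective: nominal error plus a
    'robustness error' averaged over single-neuron deletions."""
    return mse(params, X, Y) + lam * float(node_errors(params, X, Y).mean())

def prune_and_duplicate(params, X, Y):
    """One pruning-and-duplication step: copy the most critical neuron's
    incoming weights into the least useful (pruned) neuron's slot, then
    split the outgoing weights across the pair so the mapping is kept."""
    W1, b1, W2, b2 = (p.copy() for p in params)
    errs = node_errors(params, X, Y)
    critical = int(np.argmax(errs))             # loss hurts most -> duplicate
    pruned = int(np.argmin(errs))               # loss hurts least -> prune
    W1[pruned], b1[pruned] = W1[critical], b1[critical]
    W2[:, critical] *= 0.5                      # each copy now carries half
    W2[:, pruned] = W2[:, critical]             # ...of the original output weight
    return (W1, b1, W2, b2), critical, pruned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    Y = np.sin(X.sum(axis=1, keepdims=True))
    params = (rng.normal(scale=0.5, size=(8, 4)), np.zeros(8),   # W1, b1
              rng.normal(scale=0.5, size=(1, 8)), np.zeros(1))   # W2, b2
    print("worst 1-node error before:", node_errors(params, X, Y).max())
    params, crit, pru = prune_and_duplicate(params, X, Y)
    print(f"duplicated neuron {crit} into pruned slot {pru}")
    print("worst 1-node error after: ", node_errors(params, X, Y).max())
```

After duplication, the most damaging single failure is no longer fatal: losing either copy removes only half of the critical neuron's output weight. The cost is the (presumably small) contribution of the overwritten neuron, which is why the least damaging neurons are the ones selected for pruning.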
