Improving the Performance of Feedforward Neural Networks by Noise Injection into Hidden Neurons

The generalization ability of feedforward neural networks (NNs) depends on the size of the training set and the characteristics of the training patterns. In theory, the best classification performance is obtained when all possible patterns are used to train the network, which is impossible in practice. In this paper a new noise injection technique is proposed: noise injection into the hidden neurons at the summation level. Assuming that the test patterns are drawn from the same population used to generate the training set, we show that noise injection into the hidden neurons is equivalent to training with noisy input patterns (i.e., with a larger training set). The simulation results indicate that networks trained with the proposed technique and networks trained with noisy input patterns have almost the same generalization and fault tolerance abilities. The learning time required by the proposed method is considerably shorter than that required by training with noisy input patterns, and is almost the same as that required by standard backpropagation with noise-free input patterns.
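The key idea is easy to state in code. Below is a minimal, illustrative sketch (not the authors' implementation) of a one-hidden-layer network in which zero-mean Gaussian noise is added to the hidden-neuron net inputs, i.e. at the summation level, before the activation is applied. The `tanh` activation, the Gaussian noise model, and the parameter name `noise_std` are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2, noise_std=0.0):
    """Forward pass of a one-hidden-layer MLP with summation-level
    noise injection into the hidden neurons.

    During training, set noise_std > 0 so that zero-mean Gaussian
    noise is added to the hidden net input a1 = W1 x + b1 *before*
    the activation; at test time, call with noise_std = 0.
    """
    a1 = W1 @ x + b1                                  # hidden net input (summation level)
    if noise_std > 0.0:                               # training-time noise injection
        a1 = a1 + rng.normal(0.0, noise_std, size=a1.shape)
    h = np.tanh(a1)                                   # hidden activations
    y = W2 @ h + b2                                   # network output
    return y
```

Because the noise is drawn fresh for every pattern presentation, each training pattern is effectively seen under many slightly perturbed hidden responses, which is what makes the scheme comparable to enlarging the training set with noisy inputs while keeping the per-epoch cost of standard backpropagation.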
