On Node-Fault-Injection Training of an RBF Network

While injecting faults during training has long been demonstrated to be an effective means of improving the fault tolerance of a neural network, little theoretical work has been done to explain these results. In this paper, two node-fault-injection-based on-line learning algorithms are studied: (1) injecting multinode faults during training, and (2) weight decay combined with injecting multinode faults. Their almost sure convergence is proved, and the corresponding objective functions are then deduced.
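
The two algorithms admit a compact illustration. The Python sketch below shows on-line LMS training of a Gaussian RBF network in which each hidden node is independently knocked out (stuck at zero) with probability p at every update; setting lam > 0 adds the weight-decay variant. This is a minimal sketch under assumed choices, not the paper's exact formulation: the stuck-at-zero fault model, the Gaussian basis, and the hyperparameters lr, p, lam, and width are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_hidden(x, centers, width):
    """Gaussian hidden-node outputs phi_j(x) = exp(-||x - c_j||^2 / width)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / width)

def train_online(X, y, centers, width, epochs=50, lr=0.05, p=0.1, lam=0.0):
    """On-line LMS training with multinode fault injection (a sketch).

    At every update each hidden node is independently set to zero with
    probability p, simulating a stuck-at-zero node fault; lam > 0 adds
    weight decay, giving the second algorithm studied in the paper.
    All constants here are illustrative assumptions.
    """
    w = np.zeros(len(centers))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            phi = rbf_hidden(X[i], centers, width)
            mask = rng.random(len(w)) > p      # surviving nodes
            phi_f = phi * mask                 # faulty hidden output
            err = y[i] - w @ phi_f
            w += lr * (err * phi_f - lam * w)  # LMS step (+ optional decay)
    return w

# Toy usage: fit y = sin(x) on [0, 2*pi] with 10 Gaussian nodes.
X = np.linspace(0, 2 * np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel()
centers = np.linspace(0, 2 * np.pi, 10).reshape(-1, 1)
w = train_online(X, y, centers, width=0.5)
```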
