Enhancing both generalization and fault tolerance of multilayer neural networks

In this paper, we propose a method that enhances both the generalization ability and the fault tolerance of multilayer neural networks. Many methods have been proposed to enhance one or the other, but very few enhance both. We discuss combining a training method for good generalization with a method for high fault tolerance. To avoid interference between the two, we propose the local augmentation method (LAUG) for enhancing fault tolerance: it duplicates hidden units according to the importance of each unit. Because LAUG manipulates an already trained network while preserving its input-output relation, it does not interfere with any training algorithm used to enhance generalization ability. Finally, we demonstrate the effectiveness of our method through experiments.
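
To make the function-preserving duplication concrete, here is a minimal NumPy sketch of the core idea, assuming a single-hidden-layer perceptron with tanh units. The function names (forward, duplicate_unit) and the halve-the-outgoing-weights scheme are illustrative assumptions, not the paper's exact LAUG procedure, which additionally selects which units to duplicate by their importance:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # Single-hidden-layer MLP: y = W2 tanh(W1 x + b1) + b2
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def duplicate_unit(W1, b1, W2, j):
    # Illustrative sketch: append a copy of hidden unit j (same incoming
    # weights and bias), then halve the outgoing weights of both the
    # original and the copy. Each half contributes (w_j / 2) * h_j, so
    # their sum equals the original contribution and the network's
    # input-output relation is preserved.
    W1_new = np.vstack([W1, W1[j:j+1, :]])
    b1_new = np.append(b1, b1[j])
    W2_new = np.hstack([W2, W2[:, j:j+1]])
    W2_new[:, j] /= 2.0
    W2_new[:, -1] /= 2.0
    return W1_new, b1_new, W2_new

# Check: the augmented network computes exactly the same function.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)
W1d, b1d, W2d = duplicate_unit(W1, b1, W2, j=1)
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1d, b1d, W2d, b2))
```

The fault-tolerance intuition behind such duplication: if one of the two copies fails (e.g., a stuck-at-zero fault), only half of unit j's contribution to the output is lost, rather than all of it.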
