Generalization and fault tolerance in rule-based neural networks

Obtaining maximum generalization and fault tolerance is an important issue in the design of feedforward networks. Research on rule-based neural networks suggests that the generalization of a neural network is related to the directions of the pattern vectors encoded by its hidden units, while fault tolerance depends on the magnitudes of the weights. In this paper, a rule-based neural network is shown to outperform a standard neural network in both generalization and fault tolerance. In addition, a formal measure for evaluating network fault tolerance is introduced.
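
The paper's formal fault-tolerance measure is not reproduced in the abstract, but one common way to quantify the idea is to inject single stuck-at-zero weight faults and record how much accuracy the network retains on average. The sketch below is a minimal illustration under that assumption; the network, the `fault_tolerance_score` function, and the XOR toy data are all hypothetical stand-ins, not the authors' actual formulation.

```python
import numpy as np

def forward(X, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer network with sigmoid units."""
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fault_tolerance_score(X, y, W1, b1, W2, b2):
    """Average accuracy retained under single stuck-at-zero weight faults.

    Each weight is clamped to zero in turn, the network is re-evaluated,
    and classification accuracy is recorded; the mean over all injected
    faults serves as a simple scalar fault-tolerance measure.
    """
    accs = []
    for W in (W1, W2):
        for idx in np.ndindex(W.shape):
            saved = W[idx]
            W[idx] = 0.0                       # inject the fault
            preds = forward(X, W1, b1, W2, b2) > 0.5
            accs.append(np.mean(preds.ravel() == y.ravel()))
            W[idx] = saved                     # restore the weight
    return float(np.mean(accs))

# Toy usage: an untrained random network on the XOR patterns.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
print("fault tolerance:", fault_tolerance_score(X, y, W1, b1, W2, b2))
```

Under such a measure, a network whose decision regions are carried by many moderate weights degrades more gracefully than one that concentrates its function in a few large weights, which is consistent with the abstract's claim that fault tolerance depends on weight magnitudes.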
