Relationship between fault tolerance, generalization and the Vapnik-Chervonenkis (VC) dimension of feedforward ANNs

It is demonstrated that fault tolerance, generalization, and the Vapnik-Chervonenkis (VC) dimension are interrelated attributes. It is well known that the generalization error, when plotted as a function of the VC dimension h, exhibits a well-defined minimum at an optimal value of h, denoted h_opt. We show that if the VC dimension h of an ANN satisfies h ≤ h_opt (i.e., there is no excess capacity or redundancy), then fault tolerance and generalization are mutually conflicting attributes. On the other hand, if h > h_opt (i.e., there is excess capacity or redundancy), then fault tolerance and generalization are mutually synergistic attributes. In other words, training methods geared toward improving fault tolerance can also lead to better generalization, and vice versa, only when there is excess capacity or redundancy. This is consistent with our previous results indicating that complete fault tolerance in ANNs requires a significant amount of redundancy.
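
A rough sketch of where the minimum at h_opt comes from (this is the classical bound from Vapnik's structural risk minimization framework; the exact expression used in the paper may differ): with probability at least 1 − η, the expected risk R of a classifier trained on N samples satisfies

    R ≤ R_emp + sqrt( ( h (ln(2N/h) + 1) − ln(η/4) ) / N )

where R_emp is the empirical (training) risk and h is the VC dimension of the hypothesis class. As h grows, R_emp typically decreases while the square-root confidence term increases, so their sum attains a minimum at some intermediate value h_opt.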
