A probabilistic model for the fault tolerance of multilayer perceptrons

This paper presents a theoretical approach for determining the probability of misclassification of a multilayer perceptron (MLP) subject to weight errors. The applications considered are classification/recognition tasks involving binary input-output mappings. The analytical models are validated by simulation of a small illustrative example. The theoretical results, in agreement with the simulations, show that, for the example considered, Gaussian weight errors with a standard deviation of up to 22% of the weight value can be tolerated. The method developed here adds predictability to the fault-tolerance capability of neural networks and shows that this capability depends heavily on the problem data.
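The tolerance figure above can be probed empirically with a Monte Carlo sketch: perturb each weight of a trained network by zero-mean Gaussian noise whose standard deviation is a fraction of that weight's magnitude, and count misclassifications over the binary input-output mapping. The sketch below is illustrative only, not the paper's analytical model; the hand-crafted XOR network and the relative-noise model are assumptions made for the example.

```python
import numpy as np

# Hand-crafted two-layer MLP that computes XOR (illustrative weights,
# not taken from the paper's example).
W1 = np.array([[2.0, 2.0], [-2.0, -2.0]])
b1 = np.array([-1.0, 3.0])
W2 = np.array([2.0, 2.0])
b2 = -3.0

def mlp_out(x, W1, b1, W2, b2):
    """Forward pass with a hard threshold on the output unit."""
    h = np.tanh(W1 @ x + b1)
    return 1 if (W2 @ h + b2) > 0 else 0

def misclass_prob(sigma_rel, trials=2000, seed=0):
    """Estimate P(misclassification) when each weight w is replaced by
    w + N(0, (sigma_rel * |w|)^2), i.e. noise std is a fraction of |w|."""
    rng = np.random.default_rng(seed)
    X = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
    Y = [0, 1, 1, 0]  # XOR targets
    errors = 0
    for _ in range(trials):
        pW1 = W1 + rng.normal(0.0, sigma_rel * np.abs(W1))
        pb1 = b1 + rng.normal(0.0, sigma_rel * np.abs(b1))
        pW2 = W2 + rng.normal(0.0, sigma_rel * np.abs(W2))
        pb2 = b2 + rng.normal(0.0, sigma_rel * abs(b2))
        for x, y in zip(X, Y):
            if mlp_out(x, pW1, pb1, pW2, pb2) != y:
                errors += 1
    return errors / (trials * len(X))
```

Sweeping `sigma_rel` from 0 upward shows the misclassification probability rising from zero as the relative noise grows, which is the quantity the paper derives analytically rather than by sampling.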
