The paper deals with the problem of fault tolerance in multilayer networks. Although such networks already possess a reasonable capacity for fault recovery, it may be insufficient in particularly critical applications. Studies carried out by the authors have shown that the traditional back-propagation algorithm may produce one or more weights with much higher values than the others. A fault in those weights may therefore lead to a substantial decrease in the network's fault tolerance. The authors propose a learning algorithm that updates the weights so as to distribute their values as uniformly as possible within each layer. Tests performed on benchmark data sets show the considerable increase in fault tolerance obtainable with the proposed approach as compared with traditional algorithms, and with another approach found in the literature.
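The abstract does not specify how the weight distribution is made uniform, so the following is only a minimal sketch of one plausible formulation: augmenting the ordinary gradient update with a penalty on the variance of the weight magnitudes in a layer, so that no single weight dominates. The function names, the penalty form `lam * Var(|W|)`, and all parameter values are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

def uniformity_penalty_grad(W, lam=0.01):
    """Gradient of the assumed penalty lam * Var(|W|) with respect to W.

    Pushes each weight magnitude toward the layer's mean magnitude, so
    the values become more uniformly distributed within the layer.
    """
    a = np.abs(W)
    n = a.size
    # d/dW of Var(|W|): (2/n) * (|w_i| - mean(|W|)) * sign(w_i)
    # (the mean-dependence terms cancel because deviations sum to zero)
    return lam * (2.0 / n) * (a - a.mean()) * np.sign(W)

def train_step(W, grad_loss, lr=0.1, lam=0.01):
    """One update: standard back-propagation gradient plus uniformity term."""
    return W - lr * (grad_loss + uniformity_penalty_grad(W, lam))
```

With the task gradient held at zero, repeated updates contract the spread of the weight magnitudes toward their mean, which is the qualitative behavior the abstract describes: no weight retains a value much higher than the others.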