SUMMARY
A new learning algorithm is proposed to enhance the fault tolerance of feedforward neural networks. The algorithm focuses on the links (weights) that may cause errors at the output when they suffer open faults. The relevance of each synaptic weight to the output error (i.e., the sensitivity of the output error to a fault in that weight) is estimated in each training cycle of the standard backpropagation algorithm, using a Taylor expansion of the output around the fault-free weights. The weight with the maximum relevance is then decreased, so that the algorithm prevents any single weight from acquiring a large relevance. Simulation results indicate that networks trained with the proposed algorithm have significantly better fault tolerance than networks trained with standard backpropagation, and that both fault tolerance and generalization are improved.

1. Introduction
Feedforward neural networks (NNs) trained with the backpropagation algorithm have been applied successfully in a variety of areas such as speech recognition, optical character recognition, control, and medical analysis [1]. The algorithm seeks to minimize the error between the output of the NN and a target, or desired, response [2]. Although NNs were long thought to be fault tolerant because they consist of parallel processing elements, the existing learning algorithms do not make optimal use of redundant resources. Recent research has shown that NNs are not intrinsically fault tolerant and that fault tolerance has to be enhanced by an adequate scheme [3], [4]. A number of methods have been proposed to enhance the fault tolerance of NNs. The influence of the learning rate, the training time, and training with noisy input data on the performance of NNs in the presence of faults has been studied [3]. In [5] it was found that training on noisy input data also enhances the fault tolerance of NNs. The effect of injecting analog noise into the synaptic weights during multilayer neural network training on the fault tolerance property has been analyzed [6]. Procedures to build fault-tolerant NNs by replicating the hidden units have been presented [8], [12], and the minimum redundancy required to tolerate all possible single faults has been derived analytically [12]. Using an error-correcting code, a fault-tolerant design that can correct an error at an output-layer neuron was presented [7]. A learning algorithm that minimizes the difference between faulty and …
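As an illustration of the relevance-based update described above, the following Python sketch (not the authors' code; the function name, the flattened-weight representation, and the decay factor are assumptions) estimates each weight's relevance from a first-order Taylor term and shrinks the weight with the largest relevance after a backpropagation cycle:

import numpy as np

def relevance_update(weights, grads, decay=0.01):
    # weights: flattened vector of all synaptic weights
    # grads:   dE/dw from the current backpropagation cycle
    # decay:   assumed fraction by which the most relevant weight is reduced
    #
    # An open fault sets w_i to zero; to first order the resulting change in
    # the output error is dE/dw_i * (0 - w_i), so its magnitude serves as the
    # relevance of weight i.
    relevance = np.abs(grads * weights)
    # Decrease the single weight with the maximum relevance so that no one
    # connection becomes critical to the output.
    i_max = np.argmax(relevance)
    weights[i_max] *= (1.0 - decay)
    return weights, relevance

In this sketch the relevance is recomputed in every training cycle, which mirrors the idea of repeatedly penalizing whichever weight currently dominates the output error rather than bounding all weights at once.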
[1] Y. Tan et al., "Fault-tolerant back-propagation model and its generalization ability," Proceedings of 1993 International Conference on Neural Networks (IJCNN-93-Nagoya, Japan), 1993.
[2] Edward A. Rietman et al., "Back-propagation learning and nonidealities in analog neural network hardware," IEEE Trans. Neural Networks, 1991.
[3] Hideo Ito et al., "Fault tolerant design using error correcting code for multilayer neural networks," IEEE International Workshop on Defect and Fault Tolerance in VLSI Systems, 1994.
[4] Jin Wang et al., "Weight smoothing to improve network generalization," IEEE Trans. Neural Networks, 1994.
[5] Robert I. Damper et al., "Determining and improving the fault tolerance of multilayer perceptrons in a pattern-recognition application," IEEE Trans. Neural Networks, 1993.
[6] Alan F. Murray et al., "Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training," IEEE Trans. Neural Networks, 1994.
[7] Sandro Ridella et al., "Statistically controlled activation weight initialization (SCAWI)," IEEE Trans. Neural Networks, 1992.
[8] Dhananjay S. Phatak et al., "Complete and partial fault tolerance of feedforward neural nets," IEEE Trans. Neural Networks, 1995.
[9] Peter J. W. Rayner et al., "Generalization and PAC learning: some new results for the class of generalized single-layer networks," IEEE Trans. Neural Networks, 1995.
[10] Kishan G. Mehrotra et al., "Training techniques to obtain fault-tolerant neural networks," Proceedings of IEEE 24th International Symposium on Fault-Tolerant Computing, 1994.
[11] Petri Koistinen et al., "Using additive noise in back-propagation training," IEEE Trans. Neural Networks, 1992.