A functional manipulation for improving tolerance against multiple-valued weight faults of feedforward neural networks

In this paper we propose feedforward neural networks (NNs) that tolerate multiple-valued stuck-at faults of connection weights. To improve tolerance against faults with small false absolute values, we employ an activation function with a relatively gentle gradient in the last layer and steepen the gradient of the activation function in the intermediate layer. For faults with large false absolute values, a function acting as a filter suppresses their influence by clamping the products of inputs and faulty weights to allowable values. Experimental results show that our NNs are superior in fault tolerance and learning time to NNs based on other approaches, such as fault injection and forcible weight limits.
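As a rough sketch of the two manipulations described above (not the paper's actual formulation), the following Python snippet shows a steep hidden-layer activation, a gentle output-layer activation, and a clamping filter on input-weight products. The gain values, the clipping limit, and all function names are assumptions chosen for illustration.

```python
import numpy as np

def sigmoid(x, gain):
    """Logistic activation; a larger gain gives a steeper gradient."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def filtered_net_input(inputs, weights, limit):
    """Clamp each input*weight product to [-limit, limit] so a weight
    stuck at a large false value cannot dominate the weighted sum."""
    return np.clip(inputs * weights, -limit, limit).sum()

# Toy forward pass for one hidden neuron and one output neuron.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=8)   # inputs
w_hidden = rng.normal(size=8)        # hidden-layer weights
w_hidden[3] = 50.0                   # stuck-at fault with a large false value

h = sigmoid(filtered_net_input(x, w_hidden, limit=5.0), gain=4.0)  # steep hidden activation
y = sigmoid(h * 0.7, gain=0.5)       # gentle output activation (0.7 is an assumed output weight)
print(h, y)
```

Without the clamp, the single faulty weight would saturate the hidden neuron regardless of its input; with it, the fault's contribution is bounded, which is the filtering effect the abstract describes.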
