The efficient design of fault-tolerant artificial neural networks

This paper studies efficient fault-tolerant design methods for an artificial neural network (ANN) implemented on a digital VLSI chip. Because biological neural networks are highly fault-tolerant, it is widely taken for granted that ANNs should be fault-tolerant as well. In practice, however, if a faulty neuron or a faulty link occurs in an ANN currently used in engineering applications, the ANN typically no longer delivers its specified performance. Fault tolerance in an ANN is not inherent; it must be built in, and the built-in fault-tolerance mechanism must be practical and efficient enough for VLSI chip implementation. In this paper, the partial retraining (PR) scheme is proposed as a design method to achieve fault tolerance in an ANN. The PR scheme retrains only each individual neuron affected by a hardware fault, rather than the entire multilayer network, so its convergence is much faster than normal learning of the whole network. Furthermore, PR can be executed in parallel. We applied the PR scheme to a large-scale ANN for face image recognition.
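The abstract does not specify the exact update rule used by partial retraining, so the following is only a minimal sketch of the general idea: when a link fault hits one hidden neuron, retrain just that neuron's surviving incoming weights with a local delta rule so its activation again matches its pre-fault activation, leaving the rest of the network untouched. All names here (`W`, `target_act`, `faulty`, the stuck-at-zero fault model, the learning rate) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical trained first-layer weights: 4 inputs -> 3 hidden neurons.
W = rng.normal(size=(3, 4))
X = rng.normal(size=(200, 4))          # training inputs
target_act = sigmoid(X @ W.T)          # pre-fault hidden activations, used as local targets

# Inject a stuck-at-zero link fault into hidden neuron 1 (assumed fault model).
faulty = 1
W_faulty = W.copy()
W_faulty[faulty, 2] = 0.0
mask = np.ones(4)
mask[2] = 0.0                          # the broken link cannot be updated

# Partial retraining: delta-rule updates on the *remaining* weights of the
# single affected neuron only; all other neurons are left unchanged, so
# several faulty neurons could be retrained in parallel the same way.
lr = 0.5
for _ in range(500):
    a = sigmoid(X @ W_faulty[faulty])            # activation of the affected neuron
    err = target_act[:, faulty] - a              # local error w.r.t. pre-fault target
    grad = (err * a * (1 - a)) @ X / len(X)      # gradient for this neuron's weights
    W_faulty[faulty] += lr * grad * mask         # update only the surviving links

print("mean residual error:",
      np.abs(sigmoid(X @ W_faulty[faulty]) - target_act[:, faulty]).mean())
```

Because each update touches only one neuron's weight vector and uses only locally available signals, the per-neuron problem is far smaller than full network retraining, which is the source of the claimed speedup and parallelism.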
