Improving the tolerance of multilayer perceptrons by minimizing the statistical sensitivity to weight deviations

Abstract

This paper proposes a version of the backpropagation algorithm that increases the tolerance of a feedforward neural network to deviations in its weight values. Such deviations can originate either from the limited precision and/or poor weight matching of the VLSI circuit onto which the network is mapped, or from physical defects affecting the neural circuits. The modified backpropagation algorithm we propose uses the statistical sensitivity of the network to weight changes as a quantitative measure of network tolerance, and attempts to reduce this sensitivity while keeping training performance (in both error and time) similar to that obtained with the standard backpropagation algorithm.
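To make the idea concrete, the following is a minimal sketch (not the authors' exact formulation) of sensitivity-penalized training: the statistical sensitivity of each output to multiplicative weight deviations is approximated here by the sum of squared weight-scaled output gradients, added as a penalty to the usual mean-squared error. The penalty form, the `sens_weight` trade-off parameter, and the toy network are illustrative assumptions.

```python
# Sketch of backpropagation with a sensitivity penalty (assumed proxy,
# not the paper's exact derivation).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sens_weight = 0.01  # assumed hyperparameter balancing error vs. tolerance

def sensitivity_penalty(outputs):
    # Approximate statistical sensitivity to relative weight noise by
    # sum over output samples of ||w * dy/dw||^2.
    penalty = torch.zeros(())
    for y in outputs.flatten():
        grads = torch.autograd.grad(y, list(model.parameters()),
                                    create_graph=True)
        for g, w in zip(grads, model.parameters()):
            penalty = penalty + ((g * w) ** 2).sum()
    return penalty / outputs.numel()

x = torch.randn(16, 4)   # toy inputs
t = torch.randn(16, 1)   # toy targets
for _ in range(100):
    opt.zero_grad()
    y = model(x)
    loss = ((y - t) ** 2).mean() + sens_weight * sensitivity_penalty(y)
    loss.backward()      # gradients include the penalty term
    opt.step()
```

In this sketch, training drives the network toward weight configurations whose outputs change little under small relative weight perturbations, which is the qualitative effect the paper pursues.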
