Distributed fault tolerance in optimal interpolative nets

The recursive training algorithm for the optimal interpolative (OI) classification network is extended to provide distributed fault tolerance. The conventional OI Net learning algorithm produces network weights whose distribution is suboptimal from a fault-tolerance standpoint. Fault tolerance is an increasingly important consideration in hardware implementations of neural networks, yet it is often taken for granted rather than explicitly addressed in the architecture or the learning algorithm. When it is considered, it is frequently modeled with an unrealistic fault model (e.g., neurons stuck on or off rather than small weight perturbations). Realistic fault tolerance can be achieved through a smooth distribution of weights, which yields low weight salience and distributed computation. Results on the Iris classification problem show that the algorithm presented in this paper increases the fault tolerance of trained OI Nets.
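The small-weight-perturbation fault model described above can be illustrated with a minimal sketch. The code below is not the paper's OI Net or its training algorithm: it substitutes scikit-learn's MLPClassifier on the Iris data, uses zero-mean Gaussian weight perturbations as an assumed fault model, and the function perturbed_accuracy and all noise levels are hypothetical choices for illustration. A network with smoothly distributed, low-salience weights would show a flatter accuracy curve as the perturbation magnitude grows.

```python
# Sketch of fault-tolerance evaluation under a weight-perturbation fault
# model (a generic MLP stands in for the OI Net; all parameters assumed).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)

def perturbed_accuracy(net, sigma, trials=100):
    """Average test accuracy after injecting zero-mean Gaussian noise
    of standard deviation sigma into every weight matrix."""
    saved = [w.copy() for w in net.coefs_]
    accs = []
    for _ in range(trials):
        for w, w0 in zip(net.coefs_, saved):
            w[...] = w0 + rng.normal(0.0, sigma, size=w0.shape)
        accs.append(net.score(X_test, y_test))
    # Restore the unperturbed weights before returning.
    for w, w0 in zip(net.coefs_, saved):
        w[...] = w0
    return float(np.mean(accs))

# A flatter degradation curve indicates more evenly distributed
# (i.e., more fault-tolerant) weights.
for sigma in (0.0, 0.05, 0.1, 0.2):
    print(f"sigma={sigma:.2f}  accuracy={perturbed_accuracy(net, sigma):.3f}")
```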
