Metrics for Weight Stuck-at-Zero Fault in Sigmoidal FFANNs

This paper analyzes a class of weight fault model, single weight stuck-at-zero, for single-hidden-layer feedforward artificial neural networks with sigmoidal hidden nodes. Fault measures (metrics) are derived for the weight stuck-at-zero fault. Experiments are conducted on four function approximation tasks; for each task a set of 30 networks is trained, and the network with the lowest validation error is selected for further analysis of the weight stuck-at-zero fault. The average change in prediction error under single-fault seeding is measured and compared with the predicted fault measure. The correlation between the derived fault measures and the empirical measure of single-fault seeding is significant at the 0.10 level for both derived measures; for one of them, it is significant at the 0.05 level. These two derived measures are therefore good metrics for the fault tolerance of a network to a single weight fault, at least for the function approximation tasks considered. Further experimentation is required to assess the validity of these measures more broadly.
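The single-fault-seeding experiment described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: the network architecture, the toy regression task, and every name (`predict`, `stuck_at_zero_sweep`, and so on) are assumptions. Each weight of a single-hidden-layer sigmoidal network is zeroed in turn, the prediction error of the faulty network is measured, and the average change in error over all single faults is reported.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(X, W1, b1, W2, b2):
    """Single-hidden-layer FFANN: sigmoidal hidden nodes, linear output."""
    return sigmoid(X @ W1 + b1) @ W2 + b2

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def stuck_at_zero_sweep(X, y, W1, b1, W2, b2):
    """Seed every single weight stuck-at-zero fault in turn and return
    the resulting change in prediction error for each fault."""
    base = mse(y, predict(X, W1, b1, W2, b2))
    deltas = []
    for W in (W1, W2):                       # sweep both weight matrices
        for idx in np.ndindex(W.shape):
            saved = W[idx]
            W[idx] = 0.0                     # seed the stuck-at-zero fault
            deltas.append(mse(y, predict(X, W1, b1, W2, b2)) - base)
            W[idx] = saved                   # restore the fault-free weight
    return np.array(deltas)

# Toy stand-in for one of the paper's function approximation tasks
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(np.pi * X)
W1, b1 = rng.normal(size=(1, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)
print("average change in MSE:", stuck_at_zero_sweep(X, y, W1, b1, W2, b2).mean())
```

In the paper's setup the weights would come from a trained network selected by validation error; random weights are used here only to keep the sketch self-contained. The average change in error produced by the sweep is the empirical quantity that would then be correlated with the derived fault measures.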
