A Fault-Tolerant Regularizer for RBF Networks

Classical training methods for node open faults must consider many potential faulty networks; when multinode faults are considered, the space of potential faulty networks becomes very large, so the objective function and the corresponding learning algorithm become computationally complicated. This paper uses the Kullback-Leibler divergence to define an objective function for improving the fault tolerance of radial basis function (RBF) networks. Under the assumption that the output data contain a Gaussian-distributed noise term, a regularizer in the objective function is identified, and the corresponding learning algorithm is then developed. In our approach, both the objective function and the learning algorithm are computationally simple. Compared with some conventional approaches, including weight-decay-based regularizers, our approach achieves better fault tolerance. Moreover, our empirical study shows that it can also improve the generalization ability of a fault-free RBF network.
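Since the abstract only sketches the approach, the snippet below is a minimal, hedged illustration of the kind of regularized RBF training it describes: a closed-form least-squares solution whose penalty term grows with an assumed node-open-fault rate. The specific regularizer form (p/(1-p)) · wᵀ diag(ΦᵀΦ) w, the Gaussian basis, the toy sinc data, and all function names are assumptions for illustration, obtained from the standard expected-squared-error-over-random-faults argument rather than from the paper's KL-divergence derivation.

```python
# Illustrative sketch (not the paper's exact algorithm): closed-form training of an
# RBF network whose least-squares objective is augmented by a regularizer that
# accounts for random node open faults with fault rate p. The penalty
# (p / (1 - p)) * w^T diag(Phi^T Phi) w follows from averaging the squared error
# over independent node failures; the paper instead derives its regularizer from a
# KL-divergence objective, so this is only an assumption-laden approximation.
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / width^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width ** 2)

def train_fault_tolerant_rbf(X, y, centers, width, fault_rate):
    """Closed-form weights minimizing the expected squared error under node open faults."""
    Phi = rbf_design_matrix(X, centers, width)
    G = np.diag(np.diag(Phi.T @ Phi))            # diagonal correlation of the basis outputs
    lam = fault_rate / (1.0 - fault_rate)        # regularization strength grows with fault rate
    return np.linalg.solve(Phi.T @ Phi + lam * G, Phi.T @ y)

# Toy usage: noisy sinc regression with 20 randomly placed centers.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(200, 1))
y = np.sinc(X[:, 0]) + 0.1 * rng.standard_normal(200)
centers = rng.uniform(-5, 5, size=(20, 1))
w = train_fault_tolerant_rbf(X, y, centers, width=1.0, fault_rate=0.05)
```

With fault_rate set to 0 the penalty vanishes and the solution reduces to ordinary least squares on the RBF design matrix, which is one way to see why such a regularizer is computationally no harder than plain weight decay.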
