Analysis on Generalization Error of Faulty RBF Networks with Weight Decay Regularizer

In the past two decades, the use of the weight decay regularizer for improving the generalization ability of neural networks has been extensively investigated. However, most existing results apply to fault-free neural networks only. This paper extends the analysis of generalization ability to networks with multiplicative weight noise. Our result allows us not only to estimate the generalization ability of a faulty network, but also to select a good model from among various settings. Simulated experiments are performed to verify the theoretical result.
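To make the setting concrete, the following is a minimal sketch of the kind of simulation the abstract describes: an RBF network trained with weight decay (equivalent to ridge regression on Gaussian RBF features), whose test error is then estimated under multiplicative weight noise by Monte Carlo sampling. All specifics here (the sinc target, the centers, widths, the decay parameter lam, and the noise level sigma_b) are illustrative assumptions, not the paper's actual experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: noisy sinc target (illustrative choice).
x_train = rng.uniform(-5, 5, 100)
y_train = np.sinc(x_train) + 0.1 * rng.standard_normal(100)
x_test = np.linspace(-5, 5, 400)
y_test = np.sinc(x_test)

def rbf_features(x, centers, width):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-(x_i - c_j)^2 / (2 width^2))."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

centers = np.linspace(-5, 5, 20)  # RBF centers (assumed, on a uniform grid)
width = 0.8                       # RBF width (assumed)
lam = 1e-2                        # weight-decay regularization parameter (assumed)

Phi = rbf_features(x_train, centers, width)

# Weight decay = ridge solution:
#   w = argmin_w ||y - Phi w||^2 + lam ||w||^2
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y_train)

# Multiplicative weight noise: each trained weight becomes w_j * (1 + b_j),
# with b_j ~ N(0, sigma_b^2) drawn independently for every weight.
sigma_b = 0.2  # noise level (assumed)
Phi_test = rbf_features(x_test, centers, width)
errors = []
for _ in range(200):  # Monte Carlo estimate of the faulty-network test error
    w_noisy = w * (1 + sigma_b * rng.standard_normal(w.shape))
    errors.append(np.mean((Phi_test @ w_noisy - y_test) ** 2))

print(f"fault-free test MSE: {np.mean((Phi_test @ w - y_test) ** 2):.4f}")
print(f"faulty test MSE (mean over noise draws): {np.mean(errors):.4f}")
```

Sweeping lam in such a simulation and comparing the measured faulty-network test error against a theoretical estimate is one way to use an analysis like the paper's for model selection under weight noise.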
