On-Line Node Fault Injection Training Algorithm for MLP Networks: Objective Function and Convergence Analysis
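For orientation, below is a minimal sketch of the general scheme the title refers to: during on-line (per-sample) gradient descent on an MLP, each hidden node is independently forced to zero at every update, simulating a node fault, and the gradient is taken through the faulted network. Everything concrete here, the single-hidden-layer architecture, tanh activations, fault rate `p`, learning rate, and the toy sine-regression stream, is an illustrative assumption, not the paper's exact algorithm or analysis.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's exact formulation):
# on-line node-fault-injection training of a single-hidden-layer MLP.
# At each step, every hidden node is stuck at zero independently with
# probability p, and the weight update backpropagates through the
# faulty network only.

rng = np.random.default_rng(0)

n_in, n_hidden, p, lr = 1, 20, 0.05, 0.01   # illustrative hyperparameters
W1 = rng.normal(0.0, 0.5, (n_hidden, n_in)) # input-to-hidden weights
b1 = np.zeros(n_hidden)
w2 = rng.normal(0.0, 0.5, n_hidden)         # hidden-to-output weights
b2 = 0.0

def step(x, y):
    """One on-line update on sample (x, y) with a random node-fault mask."""
    global W1, b1, w2, b2
    h = np.tanh(W1 @ x + b1)                 # fault-free hidden activations
    mask = (rng.random(n_hidden) > p)        # False = node stuck at zero
    h_f = h * mask                           # faulty hidden layer
    err = (w2 @ h_f + b2) - y                # gradient of 0.5 * err**2 w.r.t. output
    g_h = err * w2 * mask * (1.0 - h ** 2)   # chain rule through tanh; faulty nodes pass no gradient
    w2 -= lr * err * h_f
    b2 -= lr * err
    W1 -= lr * np.outer(g_h, x)
    b1 -= lr * g_h
    return 0.5 * err ** 2

# Toy usage: learn y = sin(x) from a stream of samples, one at a time.
for t in range(20000):
    x = rng.uniform(-np.pi, np.pi, n_in)
    step(x, np.sin(x[0]))
```

Analyses in the references below (e.g. [5], [21]) characterize updates of this kind as minimizing, in expectation, a mean squared error plus a fault-induced regularization term, which is the type of objective-function and convergence result the title announces.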
[1] Kiyotoshi Matsuoka, et al. Noise injection into inputs in back-propagation learning, 1992, IEEE Trans. Syst. Man Cybern.
[2] Ignacio Rojas, et al. A Quantitative Study of Fault Tolerance, Noise Immunity, and Generalization Ability of MLPs, 2000, Neural Computation.
[3] Dhananjay S. Phatak, et al. Generalized Haar DWT and transformations between decision trees and neural networks, 2006, IEEE Transactions on Neural Networks.
[4] N. Kamiura. On a Weight Limit Approach for Enhancing Fault Tolerance of Feedforward Neural Networks, 2000.
[5] Andrew Chi-Sing Leung, et al. Convergence and Objective Functions of Some Fault/Noise-Injection-Based Online Learning Algorithms for RBF Networks, 2010, IEEE Transactions on Neural Networks.
[6] Andrew Chi-Sing Leung, et al. Training RBF network to tolerate single node fault, 2011, Neurocomputing.
[7] Andrew Chi-Sing Leung, et al. Analysis on Generalization Error of Faulty RBF Networks with Weight Decay Regularizer, 2009, ICONIP.
[8] C. Lee Giles, et al. An analysis of noise in recurrent neural networks: convergence and generalization, 1996, IEEE Trans. Neural Networks.
[9] Paul W. Munro, et al. Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes, 1992, NIPS.
[10] Chilukuri K. Mohan, et al. Modifying training algorithms for improved fault tolerance, 1994, Proceedings of the 1994 IEEE International Conference on Neural Networks (ICNN'94).
[11] Andrew Chi-Sing Leung, et al. On Node-Fault-Injection Training of an RBF Network, 2009, ICONIP.
[12] Andrew Chi-Sing Leung, et al. A Fault-Tolerant Regularizer for RBF Networks, 2008, IEEE Transactions on Neural Networks.
[13] Alan F. Murray, et al. Synaptic weight noise during multilayer perceptron training: fault tolerance and training improvements, 1993, IEEE Trans. Neural Networks.
[14] Wei Wu, et al. Boundedness and Convergence of Online Gradient Method With Penalty for Feedforward Neural Networks, 2009, IEEE Transactions on Neural Networks.
[15] Chalapathy Neti, et al. Maximally fault tolerant neural networks, 1992, IEEE Trans. Neural Networks.
[16] Yves Grandvalet, et al. Comments on "Noise injection into inputs in back propagation learning", 1995, IEEE Trans. Syst. Man Cybern.
[17] Alan F. Murray, et al. Enhanced MLP performance and fault tolerance resulting from synaptic weight noise during training, 1994, IEEE Trans. Neural Networks.
[18] Dhananjay S. Phatak, et al. Complete and partial fault tolerance of feedforward neural nets, 1995, IEEE Trans. Neural Networks.
[19] Andrew Chi-Sing Leung, et al. On Weight-Noise-Injection Training, 2009, ICONIP.
[20] Yves Grandvalet, et al. Noise Injection: Theoretical Prospects, 1997, Neural Computation.
[21] Andrew Chi-Sing Leung, et al. On Objective Function, Regularizer, and Prediction Error of a Learning Algorithm for Dealing With Multiplicative Weight Noise, 2009, IEEE Transactions on Neural Networks.
[22] Andreu Català, et al. Fault tolerance parameter model of radial basis function networks, 1996, Proceedings of the International Conference on Neural Networks (ICNN'96).
[23] C. H. Sequin, et al. Fault tolerance in artificial neural networks, 1990, 1990 IJCNN International Joint Conference on Neural Networks.
[24] Karen Drukker, et al. A study of the effect of noise injection on the training of artificial neural networks, 2009, 2009 International Joint Conference on Neural Networks.
[25] John Sum, et al. SNIWD: Simultaneous Weight Noise Injection with Weight Decay for MLP Training, 2009, ICONIP.
[26] Ignacio Rojas, et al. Obtaining Fault Tolerant Multilayer Perceptrons Using an Explicit Regularization, 2000, Neural Processing Letters.
[27] H. White. Some Asymptotic Results for Learning in Single Hidden-Layer Feedforward Network Models, 1989.
[28] Salvatore Cavalieri, et al. A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks, 1999, Neural Networks.
[29] Dhananjay S. Phatak, et al. Investigating the Fault Tolerance of Neural Networks, 2005, Neural Computation.
[30] A. F. Murray, et al. Fault tolerance via weight noise in analog VLSI implementations of MLPs: a case study with EPSILON, 1998.
[31] Andrew Chi-Sing Leung, et al. On the Selection of Weight Decay Parameter for Faulty Networks, 2010, IEEE Transactions on Neural Networks.
[32] Yogesh Singh, et al. Fault tolerance of feedforward artificial neural networks: a framework of study, 2003, Proceedings of the International Joint Conference on Neural Networks, 2003.
[33] E. G. Gladyshev. On Stochastic Approximation, 1965.
[34] Dhananjay S. Phatak. Relationship between fault tolerance, generalization and the Vapnik-Chervonenkis (VC) dimension of feedforward ANNs, 1999, IJCNN'99 International Joint Conference on Neural Networks, Proceedings (Cat. No.99CH36339).
[35] Guozhong An, et al. The Effects of Adding Noise During Backpropagation Training on a Generalization Performance, 1996, Neural Computation.
[36] Julio Ortega Lopera, et al. Assessing the Noise Immunity and Generalization of Radial Basis Function Networks, 2004, Neural Processing Letters.
[37] A Learning Algorithm for Fault Tolerant Feedforward Neural Networks, 1996.
[38] Itsuo Takanami, et al. A fault-value injection approach for multiple-weight-fault tolerance of MNNs, 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000), Neural Computing: New Challenges and Perspectives for the New Millennium.
[39] Christopher M. Bishop. Training with Noise is Equivalent to Tikhonov Regularization, 1995, Neural Computation.