Online training and its convergence for faulty networks with multiplicative weight noise
Andrew Chi-Sing Leung | Ruibin Feng | Wai-Yan Wan | Zi-Fa Han
[1] Guozhong An, et al. The Effects of Adding Noise During Backpropagation Training on a Generalization Performance, 1996, Neural Computation.
[2] Masashi Sugiyama, et al. Optimal design of regularization term and regularization parameter by subspace information criterion, 2002, Neural Networks.
[3] Sheng Chen, et al. Local regularization assisted orthogonal least squares regression, 2006, Neurocomputing.
[4] Robert I. Damper, et al. Determining and improving the fault tolerance of multilayer perceptrons in a pattern-recognition application, 1993, IEEE Trans. Neural Networks.
[5] Andrew Chi-Sing Leung, et al. On the Selection of Weight Decay Parameter for Faulty Networks, 2010, IEEE Transactions on Neural Networks.
[6] James B. Burr. Digital Neural Network Implementations, 1995.
[7] Ignacio Rojas, et al. Improving the tolerance of multilayer perceptrons by minimizing the statistical sensitivity to weight deviations, 2000, Neurocomputing.
[8] Zhi-Quan Luo, et al. On the Convergence of the LMS Algorithm with Adaptive Learning Rate for Linear Feedforward Networks, 1991, Neural Computation.
[9] Andreu Català, et al. Fault tolerance parameter model of radial basis function networks, 1996, Proceedings of International Conference on Neural Networks (ICNN'96).
[10] Andrew Chi-Sing Leung, et al. Two regularizers for recursive least squared algorithms in feedforward multilayered neural networks, 2001, IEEE Trans. Neural Networks.
[11] Andrew Chi-Sing Leung, et al. On Objective Function, Regularizer, and Prediction Error of a Learning Algorithm for Dealing With Multiplicative Weight Noise, 2009, IEEE Transactions on Neural Networks.
[12] Bernard Widrow, et al. Sensitivity of feedforward neural networks to weight errors, 1990, IEEE Trans. Neural Networks.
[13] Shang-Liang Chen, et al. Orthogonal least squares learning algorithm for radial basis function networks, 1991, IEEE Trans. Neural Networks.
[14] Steve W. Piche, et al. The selection of weight accuracies for Madalines, 1995, IEEE Trans. Neural Networks.
[15] E. E. Swartzlander, et al. Digital neural network implementation, 1992, Eleventh Annual International Phoenix Conference on Computers and Communication.
[16] J. Sacks. Asymptotic Distribution of Stochastic Approximation Procedures, 1958.
[17] Objective Function, 2017, Encyclopedia of Machine Learning and Data Mining.
[18] Ignacio Rojas, et al. Obtaining Fault Tolerant Multilayer Perceptrons Using an Explicit Regularization, 2000, Neural Processing Letters.
[19] Andrew Chi-Sing Leung, et al. Convergence and Objective Functions of Some Fault/Noise-Injection-Based Online Learning Algorithms for RBF Networks, 2010, IEEE Transactions on Neural Networks.
[20] Michael T. Manry, et al. LMS learning algorithms: misconceptions and new results on convergence, 2000, IEEE Trans. Neural Networks.
[21] Dhananjay S. Phatak. Relationship between fault tolerance, generalization and the Vapnik-Chervonenkis (VC) dimension of feedforward ANNs, 1999, IJCNN'99, International Joint Conference on Neural Networks.
[22] John Moody, et al. Note on generalization, regularization and architecture selection in nonlinear learning systems, 1991, Neural Networks for Signal Processing, Proceedings of the 1991 IEEE Workshop.