Online training for single hidden-layer feedforward neural networks using RLS-ELM