Universal Approximation using Incremental Constructive Feedforward Networks with Random Hidden Nodes