Dynamic Adjustment of Hidden Node Parameters for Extreme Learning Machine

Extreme learning machine (ELM), proposed by Huang et al., was developed for generalized single-hidden-layer feedforward networks with a wide variety of hidden nodes. ELM has proved to be very fast and effective, especially for function approximation problems with a predetermined network structure; however, the resulting network may contain insignificant hidden nodes. In this paper, we propose the dynamic adjustment ELM (DA-ELM), which further tunes the input parameters of insignificant hidden nodes in order to reduce the residual error. It is proved in this paper that the energy error can be effectively reduced by applying the recursive expectation-minimization theorem. In DA-ELM, the input parameters of the insignificant hidden nodes are updated in the direction that decreases the energy error at each step. The detailed theoretical foundation of DA-ELM is presented in this paper. Experimental results show that the proposed DA-ELM is more efficient than state-of-the-art algorithms such as Bayesian ELM, optimally pruned ELM, two-stage ELM, Levenberg-Marquardt, and the sensitivity-based linear learning method, as well as the original ELM.
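The abstract describes DA-ELM only at a high level. The following is a minimal Python sketch, not the authors' algorithm, illustrating the general idea: train a basic ELM, pick the hidden node with the smallest output-weight norm (treated here as "insignificant", which is an assumption), and re-tune that node's input weight and bias in a direction that decreases the residual (energy) error. The function names, the node-significance criterion, and the finite-difference update are all illustrative assumptions, not taken from the paper.

import numpy as np

def hidden_output(X, W, b):
    # Hidden-layer output matrix H for additive sigmoid hidden nodes.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fit_output_weights(X, T, W, b):
    # Least-squares output weights via the Moore-Penrose pseudoinverse,
    # and the corresponding residual ("energy") error ||T - H beta||^2.
    H = hidden_output(X, W, b)
    beta = np.linalg.pinv(H) @ T
    error = float(np.linalg.norm(T - H @ beta) ** 2)
    return beta, error

def elm_init(n_inputs, n_hidden, rng):
    # Basic ELM: input weights and biases are drawn at random.
    return rng.standard_normal((n_inputs, n_hidden)), rng.standard_normal(n_hidden)

def adjust_insignificant_node(X, T, W, b, step=1e-2, n_iter=50, eps=1e-4):
    # Pick the node with the smallest output-weight norm (assumed
    # "insignificant") and re-tune its input weight and bias with a
    # finite-difference gradient step on the residual error.
    beta, _ = fit_output_weights(X, T, W, b)
    beta2 = beta if beta.ndim > 1 else beta[:, None]
    k = int(np.argmin(np.linalg.norm(beta2, axis=1)))
    for _ in range(n_iter):
        _, base = fit_output_weights(X, T, W, b)
        grad_w = np.zeros(W.shape[0])
        for i in range(W.shape[0]):
            Wp = W.copy()
            Wp[i, k] += eps
            grad_w[i] = (fit_output_weights(X, T, Wp, b)[1] - base) / eps
        bp = b.copy()
        bp[k] += eps
        grad_b = (fit_output_weights(X, T, W, bp)[1] - base) / eps
        W[:, k] -= step * grad_w   # move in the error-decreasing direction
        b[k] -= step * grad_b
    return W, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(200, 1))
    T = np.sin(3.0 * X).ravel()
    W, b = elm_init(n_inputs=1, n_hidden=20, rng=rng)
    _, before = fit_output_weights(X, T, W, b)
    W, b = adjust_insignificant_node(X, T, W, b)
    _, after = fit_output_weights(X, T, W, b)
    print(f"residual error: {before:.4f} -> {after:.4f}")

In the paper, the update direction for the insignificant node is derived analytically from the energy-error reduction result; the finite-difference gradient above is only a stand-in to show that re-tuning a single node's input parameters, rather than adding or pruning nodes, can reduce the residual error.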

[1] Hongming Zhou, et al. Extreme Learning Machine for Regression and Multiclass Classification, 2012, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics).

[2] Zexuan Zhu, et al. A fast pruned-extreme learning machine for classification problem, 2008, Neurocomputing.

[3] Herbert Jaeger, et al. A tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the "echo state network" approach, 2005.

[4] Tomás Pevný, et al. Statistically undetectable jpeg steganography: dead ends challenges, and opportunities, 2007, MM&Sec.

[5] Catherine Blake, et al. UCI Repository of machine learning databases, 1998.

[6] Antonio J. Serrano, et al. BELM: Bayesian Extreme Learning Machine, 2011, IEEE Transactions on Neural Networks.

[7] Shang-Liang Chen, et al. Orthogonal least squares learning algorithm for radial basis function networks, 1991, IEEE Trans. Neural Networks.

[8] Johan A. K. Suykens, et al. Least Squares Support Vector Machine Classifiers, 1999, Neural Processing Letters.

[9] Timo Similä, et al. Multiresponse Sparse Regression with Application to Multidimensional Scaling, 2005, ICANN.

[10] George W. Irwin, et al. A fast nonlinear model identification method, 2005, IEEE Transactions on Automatic Control.

[11] Mohammad Bagher Menhaj, et al. Training feedforward networks with the Marquardt algorithm, 1994, IEEE Trans. Neural Networks.

[12] Jessica J. Fridrich, et al. Calibration revisited, 2009, MM&Sec '09.

[13] Zongben Xu, et al. Dynamic Extreme Learning Machine and Its Approximation Capability, 2013, IEEE Transactions on Cybernetics.

[14] R. Tibshirani, et al. Least angle regression, 2004, math/0406456.

[15] Chee Kheong Siew, et al. Universal Approximation using Incremental Constructive Feedforward Networks with Random Hidden Nodes, 2006, IEEE Transactions on Neural Networks.

[16] Geoffrey E. Hinton, et al. Learning representations by back-propagating errors, 1986, Nature.

[17] Corinna Cortes, et al. Support-Vector Networks, 1995, Machine Learning.

[18] Amparo Alonso-Betanzos, et al. A Very Fast Learning Method for Neural Networks Based on Sensitivity Analysis, 2006, J. Mach. Learn. Res.

[19] Yuan Lan, et al. Two-stage extreme learning machine for regression, 2010, Neurocomputing.

[20] Min Han, et al. A modified fast recursive hidden nodes selection algorithm for ELM, 2012, The 2012 International Joint Conference on Neural Networks (IJCNN).

[21] C. R. Rao, et al. Generalized Inverse of Matrices and its Applications, 1972.

[22] Y. Lu, et al. A Sequential Learning Scheme for Function Approximation Using Minimal Radial Basis Function Neural Networks, 1997, Neural Computation.

[23] Robert K. L. Gay, et al. Error Minimized Extreme Learning Machine With Growth of Hidden Nodes and Incremental Learning, 2009, IEEE Transactions on Neural Networks.

[24] Ah Chung Tsoi, et al. Universal Approximation Using Feedforward Neural Networks: A Survey of Some Existing Methods, and Some New Results, 1998, Neural Networks.

[25] John C. Platt. A Resource-Allocating Network for Function Interpolation, 1991, Neural Computation.

[26] Chee Kheong Siew, et al. Extreme learning machine: Theory and applications, 2006, Neurocomputing.

[27] Amaury Lendasse, et al. OP-ELM: Optimally Pruned Extreme Learning Machine, 2010, IEEE Transactions on Neural Networks.

[28] Zongben Xu, et al. Universal Approximation of Extreme Learning Machine With Adaptive Growth of Hidden Nodes, 2012, IEEE Transactions on Neural Networks and Learning Systems.