GA-SELM: Greedy algorithms for sparse extreme learning machine

In the last decade, the extreme learning machine (ELM), a learning algorithm for single-hidden-layer feedforward networks (SLFNs), has attracted considerable attention in the machine intelligence and pattern recognition communities, with numerous successful real-world applications. The ELM has several advantages, such as good generalization performance combined with extremely fast learning and low computational cost, especially when dealing with many patterns defined in a high-dimensional space. However, three major problems commonly arise when using the ELM structure: (i) the dataset may contain irrelevant variables, (ii) choosing the number of neurons in the hidden layer is difficult, and (iii) the singularity problem may occur. To overcome these limitations, several methods have been proposed within the regularization framework. In this paper, we propose several sparse ELM schemes, named GA-SELM, in which greedy algorithms are used for sparse approximation of the output weight vector of the ELM network. We investigate several greedy algorithms, namely Compressive Sampling Matching Pursuit (CoSaMP), Iterative Hard Thresholding (IHT), Orthogonal Matching Pursuit (OMP), and Stagewise Orthogonal Matching Pursuit (StOMP), to obtain regularized ELM schemes. Compared with traditional ELM schemes, the new schemes offer several benefits: low computational complexity, freedom from parameter adjustment, and avoidance of the singularity problem. Empirical studies on nine commonly used regression benchmarks show the significant advantages of the proposed approach; a comparison with the original ELM and regularized ELM schemes is also performed.
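To make the idea concrete, the following is a minimal sketch of one such scheme: an ELM with a random sigmoid hidden layer whose output weights are computed not by the usual pseudoinverse, but by Orthogonal Matching Pursuit, so that only a sparse subset of hidden neurons is active. All variable names and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def elm_hidden_output(X, W, b):
    # Sigmoid activations of the random hidden layer: H = g(XW + b)
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def omp(H, t, k):
    # Orthogonal Matching Pursuit: greedily pick k columns (hidden
    # neurons) of H that best explain target t, refitting by least
    # squares on the selected support at each step.
    residual = t.copy()
    support = []
    beta = np.zeros(H.shape[1])
    for _ in range(k):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(H.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(H[:, support], t, rcond=None)
        residual = t - H[:, support] @ coef
    beta[support] = coef
    return beta

# Toy regression: deliberately oversized hidden layer, sparse output weights.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
t = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)
L = 50                                   # number of hidden neurons
W = rng.standard_normal((5, L))          # random input weights (fixed)
b = rng.standard_normal(L)               # random biases (fixed)
H = elm_hidden_output(X, W, b)
beta = omp(H, t, k=10)                   # only 10 neurons contribute
print(np.count_nonzero(beta))
```

Because the least-squares refit leaves the residual orthogonal to the already-selected columns, OMP never re-selects a neuron, and the number of active hidden neurons is capped at the sparsity level k. This is what lets such schemes sidestep both the choice of hidden-layer size and the singularity of the full least-squares problem.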
