A fast training algorithm for extreme learning machine based on matrix decomposition

Extreme Learning Machine (ELM), a competitive machine learning technique for single-hidden-layer feedforward neural networks (SLFNs), has proven to be an efficient and effective algorithm for regression and classification problems. However, traditional ELM requires a large number of hidden nodes for complex real-world regression and classification problems, which increases the computational burden. In this paper, a decomposition-based fast ELM (DFELM) algorithm is proposed to effectively reduce the computational burden when the number of hidden nodes is large. Compared with the ELM algorithm, DFELM trains faster with a large number of hidden nodes while maintaining the same accuracy. Experiments on three regression problems, six classification problems, and a complex blast furnace modeling problem are carried out to verify the performance of the DFELM algorithm. Moreover, the decomposition method can be extended to other modified ELM algorithms to further reduce training time.
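For context, the standard ELM training that DFELM accelerates can be sketched as follows: hidden-layer weights are drawn at random, and only the output weights are solved analytically via the Moore-Penrose pseudoinverse. This is a minimal illustrative sketch (function names and parameters are chosen here for illustration, not taken from the paper); the pseudoinverse step is the part whose cost grows with the number of hidden nodes.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Minimal ELM training sketch: random hidden layer, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                 # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because the pseudoinverse of the N-by-L hidden-output matrix H dominates the cost as the hidden-node count L grows, a decomposition of this solve is the natural place to reduce training time.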
