An Improved Extreme Learning Machine with Parallelized Feature Mapping Structures

Compared with deep neural networks trained using back propagation, the extreme learning machine (ELM) learns thousands of times faster while still producing good generalization performance. To better understand the ELM, this paper studies the effect of noise on the input nodes and hidden neurons. It was found that adding a small amount of noise to the inputs or to the hidden-layer neurons has no appreciable effect on ELM performance. Although the performance of ELM improves as the number of hidden neurons increases, beyond a certain limit this can lead to overfitting. In view of this, a parallel ELM (P-ELM) is proposed to improve system performance. P-ELM is more robust to noise because of its ensemble nature and is less susceptible to overfitting because each parallel hidden layer has only a moderate number of hidden neurons. Experimental results indicate that the proposed P-ELM achieves better classification performance than ELM without a large increase in training time.
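The two ideas in the abstract can be sketched in a few lines of NumPy: a basic ELM draws random hidden weights and solves the output weights analytically via the Moore-Penrose pseudoinverse, and a P-ELM-style model trains several such hidden layers in parallel and combines them. This is a minimal illustrative sketch, not the paper's implementation; the function names and the simple output-averaging combiner are assumptions.

```python
import numpy as np

def train_elm(X, Y, n_hidden, rng):
    """Basic ELM: random hidden layer, analytic least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input-to-hidden weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def predict_elm(X, params):
    W, b, beta = params
    return np.tanh(X @ W + b) @ beta

def train_parallel_elm(X, Y, n_hidden, n_branches, seed=0):
    """P-ELM-style sketch (assumed structure): several independent,
    moderate-size hidden layers trained in parallel on the same data."""
    rng = np.random.default_rng(seed)
    return [train_elm(X, Y, n_hidden, rng) for _ in range(n_branches)]

def predict_parallel_elm(X, branches):
    # Ensemble combination by averaging branch outputs (an assumption;
    # the paper may combine the parallel feature maps differently).
    return np.mean([predict_elm(X, p) for p in branches], axis=0)
```

Because the output weights are a single least-squares solve rather than iterative back propagation, each branch trains in closed form, which is why adding parallel branches costs little extra training time.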
