Some Tricks in Parameter Selection for Extreme Learning Machine

Extreme learning machine (ELM) is a widely used neural network with random weights (NNRW) that has contributed to many fields. However, the relationship between the parameters of ELM and its performance has not been fully investigated, namely the impact of the number of hidden-layer nodes, the randomization range of the weights between the input layer and the hidden layer, the randomization range of the hidden-node thresholds, and the type of activation function. In this paper, eight benchmark functions are used to study this relationship. We report several interesting findings: more hidden-layer nodes do not guarantee better performance of ELM; the empirical randomization range for the hidden weights (i.e., [-1, 1]) and the empirical randomization range for the hidden-node thresholds (i.e., [0, 1]) may not lead to optimal performance of ELM models; and on some regression problems, ELM with the sigmoid activation function consistently outperforms ELM with the tribas (triangular basis) activation function. We hope the findings from our work provide useful guidance for researchers selecting the right parameters for ELM.
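For concreteness, the sketch below shows how the four parameters studied here (hidden-node count, hidden-weight range, threshold range, and activation function) enter a standard single-hidden-layer ELM, whose output weights are solved in closed form via the Moore-Penrose pseudo-inverse. This is a minimal NumPy illustration of the textbook ELM procedure, not the authors' experimental code; the names elm_train, w_range, and b_range are our own.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-x))

def tribas(x):
    """Triangular basis activation: max(0, 1 - |x|)."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def elm_train(X, T, n_hidden=100, w_range=(-1.0, 1.0), b_range=(0.0, 1.0),
              activation=sigmoid, rng=None):
    """Train a single-hidden-layer ELM: hidden weights and thresholds are
    drawn at random and kept fixed; only the output weights are learned."""
    rng = np.random.default_rng(rng)
    W = rng.uniform(*w_range, size=(X.shape[1], n_hidden))  # input-to-hidden weights
    b = rng.uniform(*b_range, size=n_hidden)                # hidden-node thresholds
    H = activation(X @ W + b)                               # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                            # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta, activation=sigmoid):
    """Forward pass of a trained ELM."""
    return activation(X @ W + b) @ beta

# Toy usage: fit y = sin(x) on [-3, 3] with 50 hidden nodes.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X).ravel()
W, b, beta = elm_train(X, T, n_hidden=50, rng=0)
mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
```

Varying n_hidden, w_range, b_range, and activation in this sketch reproduces exactly the kind of parameter sweep the paper performs on its eight benchmark functions.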
