On improving the conditioning of extreme learning machine: A linear case

Recently the Extreme Learning Machine (ELM) has been attracting attention for its simple and fast training algorithm, which selects the input weights at random. Given sufficiently many hidden neurons, ELM performs comparably to other methods on a wide range of regression and classification problems. In this paper, however, we argue that random input weight selection may lead to an ill-conditioned problem whose solutions are numerically unstable. To improve the conditioning of ELM, we propose an input weight selection algorithm for an ELM with linear hidden neurons. Experimental results show that the proposed algorithm maintains accuracy while keeping the condition number stable.
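The ill-conditioning claim can be illustrated with a minimal NumPy sketch of the standard ELM training procedure with linear hidden neurons. All dimensions and the random-weight construction below are illustrative assumptions, not the paper's experimental setup; the point is only that a randomly drawn hidden-layer output matrix H can have a very large condition number, making the least-squares solve for the output weights numerically unstable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: N samples with d input features (illustrative sizes).
N, d, L = 200, 5, 20          # L = number of hidden neurons
X = rng.standard_normal((N, d))
y = X @ rng.standard_normal(d) + 0.01 * rng.standard_normal(N)

# ELM step 1: draw input weights W and biases b at random.
W = rng.standard_normal((d, L))
b = rng.standard_normal(L)

# With linear hidden neurons the hidden-layer output matrix is H = X W + b.
H = X @ W + b

# ELM step 2: solve the linear least-squares problem H beta ~= y
# for the output weights beta.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# The numerical stability of this solve is governed by cond(H).  Here
# L > d, so H = X W + b has rank at most d + 1 < L and its condition
# number is enormous -- a concrete instance of the ill-conditioning
# that random input weight selection can produce.
print("cond(H) =", np.linalg.cond(H))
```

In this linear setting, choosing the input weights so that the columns of H are well separated (rather than drawing them blindly) is what an input weight selection scheme like the one proposed here aims to achieve.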
