Inverse-Free Incremental Learning Algorithms With Reduced Complexity for Regularized Extreme Learning Machine

The existing inverse-free incremental learning algorithm for the regularized extreme learning machine (ELM) is based on an inverse-free update of the regularized pseudo-inverse, which was deduced from an inverse-free recursive algorithm for updating the inverse of a Hermitian matrix. Before that recursive algorithm was applied to the existing inverse-free ELM, its improved version had already been utilized in the earlier literature. Accordingly, from the improved recursive algorithm for updating the inverse, we deduce a more efficient inverse-free algorithm to update the regularized pseudo-inverse, from which we propose an inverse-free incremental ELM algorithm based on the regularized pseudo-inverse. Usually the above-mentioned inverse is smaller than the pseudo-inverse, but on processing units with limited precision the recursive update of the inverse may introduce numerical instabilities. Thus, to further reduce the computational complexity, we also propose an inverse-free incremental ELM algorithm based on the $\mathrm{LDL}^T$ factors of the inverse, where the $\mathrm{LDL}^T$ factors are updated iteratively by the inverse $\mathrm{LDL}^T$ factorization. With respect to the existing inverse-free ELM, the proposed ELM based on the regularized pseudo-inverse and that based on the $\mathrm{LDL}^T$ factors are expected to require only $\frac{3}{8+M}$ and $\frac{1}{8+M}$ of its complexity, respectively, where $M$ is the number of output nodes. Numerical experiments show that both proposed ELM algorithms significantly accelerate the existing inverse-free ELM, with a speedup in training time of at least 1.41. On the Modified National Institute of Standards and Technology (MNIST) dataset, the proposed algorithm based on the $\mathrm{LDL}^T$ factors is usually much faster than that based on the regularized pseudo-inverse. Moreover, in the numerical experiments, the original ELM, the existing inverse-free ELM, and the two proposed ELM algorithms achieve the same regression and classification performance and produce the same solutions, i.e., the same output weights and the same output sequence for a given input sequence.
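
To make the flavor of such inverse-free growth steps concrete, the following minimal NumPy sketch adds one hidden node to a regularized ELM using the block-matrix inversion identity, so that each step needs only matrix-vector products and a single scalar division. This is an illustrative sketch under our own assumptions (the function and variable names `add_hidden_node`, `Q`, `beta`, and `lam` are ours), not the exact recursion proposed in the paper.

```python
import numpy as np

def regularized_elm_weights(H, T, lam):
    """Batch regularized ELM solution: beta = (H^T H + lam*I)^{-1} H^T T."""
    k = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ T)

def add_hidden_node(Q, beta, H, v, T, lam):
    """Grow the network by one hidden node (one new column v of H).

    Q is the current inverse (H^T H + lam*I)^{-1}.  The update relies on
    the block-matrix inversion identity, so it needs only matrix-vector
    products and one scalar division -- no new matrix inversion.
    """
    p = H.T @ v                       # correlations of the new column with H
    t = Q @ p
    s = float(v @ v) + lam - p @ t    # Schur complement (a positive scalar)
    c = (v @ T - p @ beta) / s        # output-weight row of the new node
    Q_new = np.block([[Q + np.outer(t, t) / s, -t[:, None] / s],
                      [-t[None, :] / s,        np.array([[1.0 / s]])]])
    beta_new = np.vstack([beta - np.outer(t, c), c[None, :]])
    return Q_new, beta_new, np.hstack([H, v[:, None]])
```

Growing one node at a time from a small initial network, the accumulated `beta` agrees with the batch solution $(\mathbf{H}^T\mathbf{H}+\lambda\mathbf{I})^{-1}\mathbf{H}^T\mathbf{T}$ up to round-off.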

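The $\mathrm{LDL}^T$-based variant avoids storing and updating the full inverse: it keeps $\mathbf{Q} = \mathbf{U}\,\mathrm{diag}(1/\mathbf{d})\,\mathbf{U}^T$ with a unit upper-triangular $\mathbf{U}$ and grows the factors by bordering. The sketch below, under the same hypothetical naming, is one consistent realization of an inverse $\mathrm{LDL}^T$-style update, not necessarily the paper's exact algorithm.

```python
import numpy as np

def add_hidden_node_ldl(U, d, beta, H, v, T, lam):
    """Grow the network by one node while maintaining the factors of the
    inverse, Q = U diag(1/d) U^T, with U unit upper triangular.

    Only triangular matrix-vector products and elementwise divisions are
    used: no explicit inverse is formed and no square roots are taken.
    """
    p = H.T @ v
    t = U @ ((U.T @ p) / d)           # t = Q p, computed from the factors
    s = float(v @ v) + lam - p @ t    # new diagonal entry of D
    c = (v @ T - p @ beta) / s        # output-weight row of the new node
    k = U.shape[0]
    U_new = np.block([[U,                -t[:, None]],
                      [np.zeros((1, k)), np.ones((1, 1))]])
    d_new = np.append(d, s)
    beta_new = np.vstack([beta - np.outer(t, c), c[None, :]])
    return U_new, d_new, beta_new, np.hstack([H, v[:, None]])
```

Because the bordering step only appends one column to $\mathbf{U}$ and one entry to $\mathbf{d}$, the per-step cost is dominated by two triangular matrix-vector products, roughly half the work of updating the dense inverse in the previous sketch; this is consistent in spirit, though not in detail, with the complexity ratios quoted above.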