Fault and Noise Tolerance in the Incremental Extreme Learning Machine

The extreme learning machine (ELM) is an efficient way to build single-hidden-layer feedforward networks (SLFNs). However, its fault tolerance is weak: when node noise or node failure exists in a network trained with the ELM concept, performance degrades severely unless a countermeasure is taken, and such countermeasures are seldom reported for the ELM or for incremental learning. This paper considers the situation in which a trained SLFN suffers from the coexistence of node faults and node noise. We develop two fault-tolerant incremental ELM algorithms for the regression problem, namely the node fault tolerant incremental ELM (NFTI-ELM) and the node fault tolerant convex incremental ELM (NFTCI-ELM). The NFTI-ELM determines the output weight of the newly inserted node only. We prove that the NFTI-ELM converges in terms of the training set mean squared error (MSE) of faulty SLFNs. Our numerical results show that under faulty situations the NFTI-ELM is superior to the conventional ELM and incremental ELM algorithms. To further improve performance, we propose the NFTCI-ELM, which not only determines the output weight of the newly inserted node but also updates all previously trained output weights. The NFTCI-ELM also converges in terms of the training set MSE of faulty SLFNs, and it is superior to the NFTI-ELM.
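To make the incremental construction concrete, below is a minimal Python sketch of the NFTI-ELM idea under a simplified fault model. The function name `nfti_elm_sketch`, the sigmoid activation, and the multiplicative-noise assumption (each node output scaled by 1+delta with E[delta]=0 and Var[delta]=sigma2, surviving with probability 1-p) are our own illustrative choices, not the paper's exact formulation. Under this toy model the survival probability cancels out of the single-node optimum, and the noise variance acts as a shrinkage factor on the usual incremental-ELM weight.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nfti_elm_sketch(X, y, n_nodes=50, sigma2=0.01, seed=None):
    """Illustrative incremental-ELM loop with a noise-aware output weight.

    Hypothetical simplification of the NFTI-ELM idea: each new random
    hidden node keeps all previous output weights fixed and receives the
    weight that minimizes the EXPECTED squared residual when the node
    output is scaled by (1 + delta), with E[delta] = 0 and
    Var[delta] = sigma2, and the node survives with probability 1 - p.
    Minimizing E[(e - z*beta*h)^2], z = survival * (1 + delta), gives

        beta = <e, h> / ((1 + sigma2) * <h, h>)

    where the survival probability cancels for a single node; the paper's
    exact update also accounts for interactions among faulty nodes.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    e = y.astype(float)          # current residual, initialized to the targets
    nodes = []                   # (input weights, bias, output weight) per node
    for _ in range(n_nodes):
        a = rng.standard_normal(d)   # random input weights (never retrained)
        b = rng.standard_normal()    # random bias (never retrained)
        h = sigmoid(X @ a + b)       # new node's response on the training set
        beta = (e @ h) / ((1.0 + sigma2) * (h @ h))  # shrunk least-squares step
        e = e - beta * h             # deflate the nominal (fault-free) residual
        nodes.append((a, b, beta))
    return nodes, e                  # trained nodes and final training residual
```

The NFTCI-ELM would differ by re-solving for all accumulated output weights at each insertion (e.g., a regularized least-squares fit over the full hidden-output matrix) rather than fixing past weights; that step is omitted from this sketch.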
