Sparse Minimal Learning Machines Via L_1/2 Norm Regularization

The Minimal Learning Machine (MLM) is a supervised method in which learning consists of fitting a multiresponse linear regression model between distances computed in the input and output spaces. A critical issue in training MLMs is the selection of prototypes, also called reference points (RPs), from which these distances are taken. In its original formulation, the MLM selects the RPs randomly from the training data. In this paper we show empirically that this random selection can lead to poor generalization. We then propose a novel pruning method for selecting RPs based on L_1/2 norm regularization. Our results show that the proposed method outperforms the original MLM and its variants.
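
To make the training step concrete, the sketch below (Python, assuming NumPy and SciPy are available) first fits the MLM distance-regression model by ordinary least squares and then sparsifies the coefficient matrix with the iterative half-thresholding algorithm for L_1/2 regularization due to Xu et al. It is a minimal illustration under stated assumptions, not the implementation evaluated in the paper: the function names (mlm_fit, half_threshold, sparse_mlm_fit), the fixed step size, the regularization strength, and the elementwise (rather than row-wise) sparsification are all choices made for the sketch.

    import numpy as np
    from scipy.spatial.distance import cdist

    def mlm_fit(X, Y, rp_idx):
        """Ordinary least-squares MLM training step.

        Fits the multiresponse linear model Dx @ B ~= Dy, where Dx and Dy
        hold the distances from every training sample to the reference
        points (RPs) in the input and output spaces, respectively.
        X: (n, p) inputs, Y: (n, q) outputs, rp_idx: indices of the RPs.
        """
        Dx = cdist(X, X[rp_idx])                     # (n, k) input-space distances
        Dy = cdist(Y, Y[rp_idx])                     # (n, k) output-space distances
        B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)  # (k, k) coefficient matrix
        return B

    def half_threshold(T, lam, mu):
        """Componentwise half-thresholding operator of Xu et al. for L_1/2
        regularization: entries below the threshold are set exactly to zero."""
        absT = np.abs(T)
        thresh = (54.0 ** (1.0 / 3.0) / 4.0) * (lam * mu) ** (2.0 / 3.0)
        with np.errstate(divide="ignore", invalid="ignore"):
            phi = np.arccos(np.clip((lam * mu / 8.0) * (absT / 3.0) ** (-1.5),
                                    -1.0, 1.0))
            shrunk = (2.0 / 3.0) * T * (1.0 + np.cos(2.0 * np.pi / 3.0
                                                     - 2.0 * phi / 3.0))
        return np.where(absT > thresh, shrunk, 0.0)

    def sparse_mlm_fit(X, Y, rp_idx, lam=1e-2, n_iter=200):
        """Iterative half thresholding applied to the MLM coefficient matrix."""
        Dx = cdist(X, X[rp_idx])
        Dy = cdist(Y, Y[rp_idx])
        mu = 0.99 / np.linalg.norm(Dx, 2) ** 2       # step size below 1/||Dx||_2^2
        B = np.zeros((Dx.shape[1], Dy.shape[1]))
        for _ in range(n_iter):
            grad = Dx.T @ (Dx @ B - Dy)              # gradient of 0.5*||Dx B - Dy||_F^2
            B = half_threshold(B - mu * grad, lam, mu)
        return B

A row of B driven entirely to zero identifies a reference point that contributes nothing to the fitted model and is therefore a candidate for pruning. At test time the MLM recovers outputs from the predicted output-space distances by multilateration; that step is omitted here.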
