A new perspective for Minimal Learning Machines: A lightweight approach

Abstract This paper introduces a new procedure to train Minimal Learning Machines (MLM) for regression tasks, together with a new prediction process for MLM. A well-known drawback of the original MLM formulation is its lack of sparseness. The most recent efforts on this problem rely heavily on selecting reference points before the training and prediction steps, based on some assumption about the data. In the opposite direction, we explore a formulation of MLM that does not rely on any data assumption for prior selection. Instead, our proposal, named Lightweight Minimal Learning Machine (LW-MLM), builds a regularized system that imposes sparseness. We achieve this sparsity criterion not through selection but by incorporating weighted information into the model. We validate the contributions of this paper through four types of experiments that evaluate different aspects of our proposal: the prediction error performance, the goodness-of-fit of estimated versus measured values, the norm values related to sparsity, and the prediction error in high-dimensional settings. Based on the results, we show that LW-MLM is a valid alternative, achieving accuracy similar to or higher than other variants while all methods are seen as statistically equivalent.
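For readers unfamiliar with the MLM pipeline the abstract refers to, the sketch below outlines the standard MLM regression procedure (distance matrices to reference points, a linear mapping between them, and prediction by multilateration) with a generic weighted ridge term standing in for the kind of regularization the abstract describes. The penalty `lam` and the diagonal `weights` are illustrative assumptions, not the paper's exact LW-MLM formulation or its new prediction process.

```python
# Minimal sketch of MLM regression with a weighted ridge-style solution.
# The base steps follow the standard MLM formulation from the literature;
# the regularizer is only an assumed illustration, not the authors' LW-MLM.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import least_squares


def train_mlm(X, Y, R_x, R_y, lam=1e-2, weights=None):
    """Fit the distance-regression matrix B; X is (n, d), Y is (n, p) (2-D even for scalar outputs)."""
    Dx = cdist(X, R_x)                       # input-space distances to reference points
    Dy = cdist(Y, R_y)                       # output-space distances to reference points
    W = np.eye(Dx.shape[1]) if weights is None else np.diag(weights)
    # Regularized least squares: (Dx'Dx + lam*W) B = Dx'Dy
    return np.linalg.solve(Dx.T @ Dx + lam * W, Dx.T @ Dy)


def predict_mlm(x_new, B, R_x, R_y):
    """Standard MLM prediction: find y whose distances to R_y match the estimated ones."""
    d_hat = cdist(x_new[None, :], R_x) @ B   # estimated output-space distances
    residual = lambda y: cdist(y[None, :], R_y).ravel() - d_hat.ravel()
    y0 = R_y.mean(axis=0)                    # start the search from the reference centroid
    return least_squares(residual, y0).x
```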
