Instance selection based on sample entropy for efficient data classification with ELM

Instance selection, also known as sample selection, is an important preprocessing step for pattern classification. Almost all existing instance selection methods are developed for specific classifiers, such as the nearest neighbor (NN) classifier or the support vector machine (SVM) classifier; few are designed for single hidden layer feed-forward neural network (SLFN) classifiers. Based on sample entropy, this paper presents an instance selection method for efficient data classification with the extreme learning machine (ELM), which is used to train an SLFN. The proposed method is compared with four state-of-the-art approaches in a series of experiments. The experimental results show that the proposed method provides similar generalization performance with lower computational complexity.
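To make the idea concrete, the sketch below shows one plausible instantiation in Python, not the authors' exact algorithm: a basic ELM is trained with random hidden weights and a Moore-Penrose pseudoinverse solution for the output weights, per-instance entropy is computed from softmax-normalized ELM outputs, and the highest-entropy (most ambiguous) instances are retained as the reduced training set. The function names, the softmax step, and the high-entropy retention rule are illustrative assumptions.

```python
import numpy as np

def train_elm(X, Y, n_hidden=50, seed=None):
    """Train a basic ELM: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))  # random input weights
    b = rng.uniform(-1, 1, n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # sigmoid hidden-layer outputs
    beta = np.linalg.pinv(H) @ Y                    # Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Raw ELM outputs for one-hot-encoded targets."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def select_by_entropy(X, Y, keep_ratio=0.5, n_hidden=50, seed=None):
    """Illustrative selection rule (an assumption): keep the instances whose
    predicted class distribution has the highest entropy, i.e. the samples
    the ELM finds most ambiguous."""
    W, b, beta = train_elm(X, Y, n_hidden, seed)
    raw = elm_predict(X, W, b, beta)
    # softmax turns raw ELM outputs into per-class probabilities
    e = np.exp(raw - raw.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)   # per-sample entropy
    idx = np.argsort(entropy)[::-1][: int(keep_ratio * len(X))]
    return X[idx], Y[idx]

# usage: Y_onehot is an (n_samples, n_classes) one-hot label matrix;
# keep half the training set, then retrain the ELM on the reduced set
# Xs, Ys = select_by_entropy(X_train, Y_onehot, keep_ratio=0.5, seed=0)
# W, b, beta = train_elm(Xs, Ys, seed=0)
```

Training on the reduced set is where the efficiency gain comes from: the pseudoinverse step scales with the number of retained instances, so halving the training set roughly halves the dominant cost.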
