Lightweight Feature Selection Methods Based on Standardized Measure of Dispersion for Mining Big Data

Big data analytics is an emerging research field that poses technical challenges for both commercial IT deployments and the big data research community. One inherent problem of big data is the curse of dimensionality: modern data are described by many attributes and stored in high-dimensional form. In data analytics, feature selection is widely used to lighten the processing load of inducing a data mining model. When mining high-dimensional data, however, the search space from which an optimal feature subset must be derived grows exponentially, leading to an intractable computational demand. To tackle this high-dimensionality problem and the challenge of high-speed processing over big data, this paper proposes a collection of novel lightweight feature selection methods. The methods are designed specifically to process high-dimensional data quickly, by rapidly clustering and separating attributes using the standardized measure of dispersion (the coefficient of variation). For performance evaluation, several types of big data with high dimensionality are tested with the new feature selection algorithms, which achieve better classification accuracy within a relatively short time compared with existing feature selection methods.
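The core idea described above, grouping attributes by their coefficient of variation (the standardized measure of dispersion) and keeping a representative attribute from each group, can be sketched as follows. This is a minimal illustration only: the equal-width binning scheme and the "highest CV per group" representative rule are assumptions for the sketch, not the paper's exact algorithm.

```python
import statistics

def coefficient_of_variation(values):
    """Standardized measure of dispersion: standard deviation divided by the mean."""
    mean = statistics.fmean(values)
    if mean == 0:
        return 0.0
    return statistics.stdev(values) / abs(mean)

def select_features_by_cv(columns, n_groups=3):
    """Bin features by their coefficient of variation, then keep one
    representative feature (the highest-CV one, an assumed rule) per bin.
    `columns` is a list of feature columns (one list of values per feature).
    Returns the indices of the selected features."""
    cvs = [coefficient_of_variation(col) for col in columns]
    lo, hi = min(cvs), max(cvs)
    width = (hi - lo) / n_groups or 1.0   # guard against all-equal CVs
    representatives = {}
    for idx, cv in enumerate(cvs):
        bin_id = min(int((cv - lo) / width), n_groups - 1)
        if bin_id not in representatives or cv > cvs[representatives[bin_id]]:
            representatives[bin_id] = idx
    return sorted(representatives.values())

# A constant feature, a high-dispersion feature, and a near-constant feature:
features = [[1, 1, 1, 1], [1, 2, 3, 4], [10, 10, 10, 11]]
print(select_features_by_cv(features, n_groups=2))  # → [1, 2]
```

Because each feature's CV is computed in a single pass and no feature-subset search is performed, the cost grows only linearly with the number of attributes, which is what makes this style of filter lightweight compared with wrapper-based search.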
