Feature selection plays a major role in the preprocessing stage of data mining and aids model construction by identifying relevant features. Rough set theory has emerged in recent years as an important paradigm for feature selection, i.e., finding a reduct of the conditional attributes of a given data set. Two control strategies for reduct computation are Sequential Forward Selection (SFS) and Sequential Backward Elimination (SBE). With the objective of scalable feature selection, several MapReduce-based approaches have been proposed in the literature. All of these approaches are SFS based and result in a superset of the reduct, i.e., one that may contain redundant attributes. Although SBE approaches yield an exact reduct, they require a large amount of data movement in the shuffle-and-sort phase of MapReduce. To overcome this problem and to optimize network bandwidth utilization, a novel hashing-supported SBE reduct algorithm (MRSBER_Hash) is proposed in this work and implemented using the iterative MapReduce framework of Apache Spark. Experiments conducted on large benchmark decision systems have empirically established the relevance of the proposed approach for decision systems with a large cardinality of conditional attributes.
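To make the idea concrete, the sketch below shows, in PySpark, how an SBE-style reduct search can be driven by a positive-region check, and where hashing the projection of each object onto the candidate attribute set can shrink the keys that are shuffled. This is a minimal illustration under assumptions, not the authors' MRSBER_Hash algorithm: the toy decision table, the helper names `signature` and `positive_region_size`, and the use of an MD5 digest are all hypothetical choices made for the example.

```python
# Minimal, hypothetical sketch of SBE reduct search on Spark.
# Not the MRSBER_Hash implementation; names and data are illustrative.
import hashlib
from pyspark import SparkContext

sc = SparkContext(appName="sbe-reduct-sketch")

# Toy decision table: conditional attribute values, decision attribute last.
rows = sc.parallelize([
    (1, 0, 1, "yes"),
    (1, 1, 1, "yes"),
    (0, 0, 1, "no"),
    (0, 1, 0, "no"),
]).cache()
n_cond = 3  # number of conditional attributes


def signature(row, attrs):
    # Hash the projection of the row onto `attrs`, so only a short
    # fixed-size digest (not the full value tuple) is shuffled.
    key = "|".join(str(row[a]) for a in attrs)
    return hashlib.md5(key.encode()).hexdigest()


def positive_region_size(attrs):
    # Count objects whose equivalence class (induced by `attrs`)
    # carries a single decision value.
    classes = (rows
               .map(lambda r: (signature(r, attrs), (1, {r[-1]})))
               .reduceByKey(lambda a, b: (a[0] + b[0], a[1] | b[1])))
    return (classes
            .filter(lambda kv: len(kv[1][1]) == 1)
            .map(lambda kv: kv[1][0])
            .sum())


# Sequential backward elimination: start from all conditional attributes
# and drop an attribute whenever its removal preserves the dependency
# (positive-region size) of the full attribute set.
full_gamma = positive_region_size(list(range(n_cond)))
reduct = list(range(n_cond))
for a in range(n_cond):
    candidate = [x for x in reduct if x != a]
    if candidate and positive_region_size(candidate) == full_gamma:
        reduct = candidate

print("reduct (attribute indices):", reduct)
```

The point of hashing in this setting is that the shuffled key has a fixed size regardless of how many attributes remain in the candidate set, which is where the bandwidth saving for high-dimensional decision systems would come from; a production implementation would also need to account for the (rare) possibility of hash collisions merging distinct equivalence classes.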