Data mining is a form of knowledge discovery used to solve problems in a specific domain. Classification is a technique for assigning class labels to unknown data. Several classification methods exist, such as Bayesian classifiers, decision trees, rule-based methods, and neural networks. Before applying any mining technique, irrelevant and redundant features need to be removed. This filtering is performed with feature selection techniques, which fall into wrapper, filter, and hybrid approaches. The central idea of feature selection is to select a subset of the input variables by eliminating features with little or no predictive information. Its direct benefits include building simpler and more comprehensible models, improving performance, and helping to organize, clean, and understand data. This paper surveys different feature selection methods and compares their accuracy and performance to show which technique better improves classification accuracy.

Key words—Classification, data mining, feature selection technique, Weka in classification, wrapper method
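To make the filter idea concrete, here is a minimal sketch of a filter-style feature selector on hypothetical toy data (the function name `filter_select`, the scoring rule, and the data are illustrative assumptions, not the paper's method). Each feature is scored independently of any classifier, by the absolute difference between the two class means normalised by the pooled standard deviation, and the top-k features are kept; a constant feature, which carries no predictive information, scores zero and is eliminated.

```python
# Illustrative filter-method feature selection (hypothetical example).
# Scores each feature by |mean(class 0) - mean(class 1)| / pooled std
# and keeps the k highest-scoring features.

import statistics

def filter_select(X, y, k):
    """Return the indices of the k highest-scoring features."""
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        col0 = [row[j] for row, label in zip(X, y) if label == 0]
        col1 = [row[j] for row, label in zip(X, y) if label == 1]
        # Guard against a constant feature (zero spread).
        spread = statistics.pstdev(col0 + col1) or 1.0
        score = abs(statistics.mean(col0) - statistics.mean(col1)) / spread
        scores.append((score, j))
    # Rank features by score, keep the best k, report indices in order.
    return sorted(j for _, j in sorted(scores, reverse=True)[:k])

# Toy data: feature 0 separates the classes strongly, feature 1 weakly,
# feature 2 is constant (no predictive information).
X = [[0.1, 5.0, 1.0],
     [0.2, 4.0, 1.0],
     [0.9, 5.5, 1.0],
     [1.0, 4.5, 1.0]]
y = [0, 0, 1, 1]

print(filter_select(X, y, 2))  # → [0, 1]: the constant feature is dropped
```

A wrapper method would instead train the classifier on candidate subsets and keep the subset with the best accuracy, which is more expensive but tailored to that classifier; the filter score above is computed once, independently of any model.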