Feature selection involves identifying a subset of the most useful features that produces results comparable to those obtained with the original, entire set of features. A feature selection algorithm may be evaluated from both the efficiency and the effectiveness points of view: efficiency concerns the time required to find a subset of features, while effectiveness relates to the quality of that subset. Based on these criteria, a fast clustering-based feature selection algorithm, FAST, is proposed and experimentally evaluated in this paper. The FAST algorithm works in two steps. In the first step, features are divided into clusters using graph-theoretic clustering methods. In the second step, the most representative feature, i.e., the one most strongly related to the target classes, is selected from each cluster to form the final subset of features. Because features in different clusters are relatively independent, the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. To ensure the efficiency of FAST, we adopt the efficient minimum-spanning-tree (MST) clustering method. The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical study. Extensive experiments are carried out to compare FAST with several representative feature selection algorithms, namely FCBF, ReliefF, CFS, Consist, and FOCUS-SF, with respect to four types of well-known classifiers, namely the probability-based Naive Bayes, the tree-based C4.5, the instance-based IB1, and the rule-based RIPPER, before and after feature selection. The results on 35 publicly available, real-world, high-dimensional image, microarray, and text datasets demonstrate that FAST not only produces smaller subsets of features but also improves the performance of the four types of classifiers.
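To make the two-step pipeline concrete, the following Python sketch outlines a FAST-style selector under simplifying assumptions; it is not the authors' exact procedure. The symmetric-uncertainty estimator, the use of 1 - SU as MST edge weights, the edge-removal rule, and the names entropy, symmetric_uncertainty, and fast_like_selection are illustrative choices, and SciPy's minimum_spanning_tree and connected_components stand in for the paper's graph-theoretic clustering step. Discrete (e.g., pre-binned) feature values and class labels are assumed.

# Minimal FAST-style sketch: cluster features with an MST, then keep the
# most class-relevant feature per cluster. Illustrative assumptions only.
from collections import Counter

import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree


def entropy(values):
    # Shannon entropy of a sequence of discrete symbols.
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())


def symmetric_uncertainty(x, y):
    # SU(X, Y) = 2 * [H(X) + H(Y) - H(X, Y)] / [H(X) + H(Y)], in [0, 1].
    hx, hy = entropy(list(x)), entropy(list(y))
    hxy = entropy(list(zip(x, y)))
    denom = hx + hy
    return 0.0 if denom == 0 else 2.0 * (hx + hy - hxy) / denom


def fast_like_selection(X, y):
    n_features = X.shape[1]

    # Relevance of each feature to the class (feature-class SU).
    su_fc = np.array([symmetric_uncertainty(X[:, i], y) for i in range(n_features)])

    # Pairwise feature-feature SU, used as edge weights of a complete graph.
    su_ff = np.zeros((n_features, n_features))
    for i in range(n_features):
        for j in range(i + 1, n_features):
            su_ff[i, j] = su_ff[j, i] = symmetric_uncertainty(X[:, i], X[:, j])

    # Minimum spanning tree over 1 - SU, so strongly correlated features
    # tend to stay connected; the diagonal is zeroed to avoid self-loops.
    dist = np.clip(1.0 - su_ff, 1e-9, None)
    np.fill_diagonal(dist, 0.0)
    mst = minimum_spanning_tree(dist).toarray()

    # Drop tree edges whose feature-feature correlation is weaker than both
    # endpoints' relevance to the class; the remaining trees are the clusters.
    adj = np.zeros_like(mst)
    for i, j in zip(*np.nonzero(mst)):
        if su_ff[i, j] >= min(su_fc[i], su_fc[j]):
            adj[i, j] = adj[j, i] = 1.0
    n_clusters, labels = connected_components(adj, directed=False)

    # Keep the single most class-relevant feature from each cluster.
    return [int(np.argmax(np.where(labels == c, su_fc, -1.0))) for c in range(n_clusters)]


# Usage (hypothetical data): selected = fast_like_selection(X_discrete, y)
# returns the column indices of the chosen features.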