Correlation and single-variable-classifier methods are very simple algorithms for selecting a subset of variables in a dimension-reduction problem: each uses a measure of the relevance of a single variable to the target classes, without considering the properties of the predictor that will be used. In this paper, along with a description of correlation and single-variable-classifier ranking methods, we present the application of these algorithms to the NIPS 2003 Feature Selection Challenge problems. The results show that these methods can serve as primary, computationally efficient, and easy-to-implement techniques that perform well, especially when the variable space is very large. We also show that, in all cases, an ensemble-averaging predictor outperforms a single stand-alone predictor.
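The correlation-ranking idea can be sketched as follows: score each variable independently by the magnitude of its Pearson correlation with the class labels, then keep the top-ranked variables. This is an illustrative sketch, not the authors' code; the function name and interface are assumptions.

```python
import numpy as np

def correlation_ranking(X, y):
    """Rank features by |Pearson correlation| with the target.

    X: (n_samples, n_features) array of variables.
    y: (n_samples,) array of class labels (e.g. 0/1).
    Returns feature indices ordered from most to least relevant.
    Note: each feature is scored on its own, ignoring the predictor
    that will later be trained -- the defining trait of these filters.
    """
    Xc = X - X.mean(axis=0)          # center each feature
    yc = y - y.mean()                # center the target
    num = Xc.T @ yc                  # per-feature covariance numerators
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    scores = np.abs(num / denom)     # |Pearson r| per feature
    return np.argsort(scores)[::-1]  # best feature first
```

A single-variable classifier ranking works the same way, except the score is the accuracy of a one-feature threshold classifier rather than a correlation coefficient.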