Feature Selection through Minimization of the VC Dimension

Feature selection involves identifying the most relevant subset of input features, with a view to improving the generalization of predictive models by reducing overfitting. Directly searching for the most relevant combination of attributes is NP-hard. Variable selection is of critical importance in many applications, such as microarray data analysis, where selecting a small number of discriminative features is crucial to developing useful models of disease mechanisms and to prioritizing targets for drug discovery. The recently proposed Minimal Complexity Machine (MCM) learns a hyperplane classifier by minimizing an exact ($\Theta$) bound on its VC dimension, and it is well known that a lower VC dimension contributes to good generalization. For a linear hyperplane classifier in the input space, the VC dimension is upper bounded by the number of features; hence, a linear classifier with a small VC dimension is parsimonious in the set of features it employs. In this paper, we use the linear MCM to learn a classifier in which a large number of weights are zero; the features with non-zero weights are the ones selected. The selected features are then used to learn a kernel SVM classifier. On a number of benchmark datasets, the features chosen by the linear MCM yield test set accuracies comparable to or better than those obtained with methods such as ReliefF and FCBF. The linear MCM typically chooses one-tenth the number of attributes chosen by the other methods; on some very high-dimensional datasets, the MCM chooses about $0.6\%$ of the features, whereas ReliefF and FCBF choose 70 to 140 times more. This demonstrates that minimizing the VC dimension may provide a new and very effective route to feature selection and to learning sparse representations.
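The select-then-retrain pipeline described above (fit a sparse linear classifier, keep the features with non-zero weights, then train a kernel SVM on those features alone) can be sketched as follows. Since an MCM solver is not available in standard libraries, this sketch substitutes an L1-penalized linear SVM as the sparse linear step; the synthetic data, parameter values, and sparsity threshold are illustrative assumptions, not the paper's experimental setup.

# Sketch of the select-then-retrain pipeline, with an L1-penalized
# LinearSVC standing in for the linear MCM (assumption, not the
# authors' solver).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC, SVC
from sklearn.metrics import accuracy_score

# Synthetic high-dimensional data standing in for a benchmark dataset.
X, y = make_classification(n_samples=500, n_features=2000,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# Step 1: fit a sparse linear classifier; features with non-zero
# weights are the ones selected (the linear MCM plays this role
# in the paper).
sparse_lin = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000)
sparse_lin.fit(X_tr, y_tr)
selected = np.flatnonzero(np.abs(sparse_lin.coef_.ravel()) > 1e-6)
print(f"selected {selected.size} of {X.shape[1]} features")

# Step 2: train a kernel SVM on the selected features only and
# evaluate on the held-out test split.
svm = SVC(kernel="rbf", gamma="scale", C=1.0)
svm.fit(X_tr[:, selected], y_tr)
print("test accuracy:",
      accuracy_score(y_te, svm.predict(X_te[:, selected])))

The regularization strength of the sparse linear step controls how many features survive; in the paper this sparsity arises from minimizing the VC-dimension bound rather than from an L1 penalty.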
