Variable selection for inferential models with relatively high-dimensional data: Between-method heterogeneity and covariate stability as adjuncts to robust selection

Variable selection in inferential modelling is problematic when the number of variables is large relative to the number of data points, especially when multicollinearity is present. A variety of techniques have been described to identify ‘important’ subsets of variables from within a large parameter space, but these may produce different results, which creates difficulties for inference and reproducibility. Our aim was to evaluate the extent to which variable selection changes depending on the statistical approach, and whether triangulation across methods could enhance data interpretation. A real dataset containing 408 subjects, 337 explanatory variables and a normally distributed outcome was used. We show that, with model hyperparameters optimised to minimise cross-validation error, ten methods of automated variable selection produced markedly different results: different variables were selected and model sparsity varied greatly. Comparison between multiple methods provided valuable additional insights. Two variables that were consistently selected and stable across all methods accounted for the majority of the explainable variability; these were the most plausible important candidate variables. Further variables of importance were identified by evaluating selection stability across all methods. In conclusion, triangulation of results across methods, including the use of covariate stability, can greatly enhance data interpretation and confidence in variable selection.
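
As a minimal sketch of the two-part workflow the abstract describes (not the authors' code), the following R snippet tunes several penalised-regression selectors by cross-validation, compares the selected subsets across methods, and estimates covariate stability by bootstrap resampling. It assumes a numeric predictor matrix `x` (408 rows, 337 named columns) and a continuous outcome vector `y`; the four methods shown stand in for the full set of ten, and the number of bootstrap replicates is illustrative.

```r
library(glmnet)   # lasso and elastic net
library(ncvreg)   # MCP and SCAD penalties

set.seed(2020)

## Fit each method with its penalty weight tuned to minimise 10-fold CV error
cv_lasso <- cv.glmnet(x, y, alpha = 1)
cv_enet  <- cv.glmnet(x, y, alpha = 0.5)
cv_mcp   <- cv.ncvreg(x, y, penalty = "MCP")
cv_scad  <- cv.ncvreg(x, y, penalty = "SCAD")

## Helpers: names of variables with non-zero coefficients at the CV-optimal lambda
nz_glmnet <- function(fit) {
  cf <- coef(fit, s = "lambda.min")
  rownames(cf)[as.vector(cf) != 0 & rownames(cf) != "(Intercept)"]
}
nz_ncvreg <- function(fit) {
  cf <- coef(fit)   # coefficients at the lambda minimising CV error
  names(cf)[cf != 0 & names(cf) != "(Intercept)"]
}

selected <- list(lasso = nz_glmnet(cv_lasso), enet = nz_glmnet(cv_enet),
                 mcp   = nz_ncvreg(cv_mcp),   scad = nz_ncvreg(cv_scad))

## Between-method heterogeneity: sparsity and subset membership both differ
sapply(selected, length)      # number of variables retained per method
Reduce(intersect, selected)   # variables selected by every method

## Covariate stability: proportion of bootstrap resamples in which each
## variable is selected (lasso shown; repeat for each method)
B <- 200
freq <- setNames(numeric(ncol(x)), colnames(x))
for (b in seq_len(B)) {
  i   <- sample(nrow(x), replace = TRUE)
  sel <- nz_glmnet(cv.glmnet(x[i, ], y[i], alpha = 1))
  freq[sel] <- freq[sel] + 1
}
head(sort(freq / B, decreasing = TRUE))   # most stable covariates
```

On this reading of the abstract, triangulation amounts to combining the two outputs: variables that appear both in the cross-method intersection and near the top of the bootstrap stability ranking are the strongest candidate covariates, while variables selected by only one method or with low selection proportions warrant more cautious interpretation.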
