Boosting the discriminatory power of sparse survival models via optimization of the concordance index and stability selection

Background: When constructing new biomarker or gene signature scores for time-to-event outcomes, the underlying aims are to develop a discrimination model that helps to predict whether patients have a poor or good prognosis and to identify the most influential variables for this task. In practice, this is often done by fitting Cox models, which, however, are not necessarily optimal with respect to the resulting discriminatory power and rest on restrictive assumptions. We present a combined approach to automatically select and fit sparse discrimination models for potentially high-dimensional survival data based on boosting a smooth version of the concordance index (C-index). Owing to this objective function, the resulting prediction models are optimal with respect to their ability to discriminate between patients with longer and shorter survival times. The gradient boosting algorithm is combined with the stability selection approach to enhance and control its variable selection properties.

Results: The resulting algorithm fits prediction models based on the rankings of the survival times and automatically selects only the most stable predictors. The performance of the approach, which works best for small numbers of informative predictors, is demonstrated in a large-scale simulation study: C-index boosting in combination with stability selection identifies a small subset of informative predictors from a much larger set of non-informative ones while controlling the per-family error rate. In an application to discover biomarkers for breast cancer patients based on gene expression data, stability selection yielded sparser models, and the resulting discriminatory power was higher than with lasso-penalized Cox regression models.

Conclusion: The combination of stability selection and C-index boosting can be used to select small numbers of informative biomarkers and to derive new prediction rules that are optimal with respect to their discriminatory power. Stability selection controls the per-family error rate, which also makes the new approach appealing from an inferential point of view, as it provides an alternative to classical hypothesis tests for single predictor effects. Due to the shrinkage and variable selection properties of statistical boosting algorithms, such tests are typically infeasible for prediction models fitted by boosting.
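The workflow can be sketched in a few lines of R using the packages mboost (which provides componentwise gradient boosting with a smooth C-index loss) and stabs (which implements stability selection with error control). The sketch below is purely illustrative: the simulated data, the number of informative predictors, and the tuning values (mstop, q, PFER) are assumptions made for demonstration and are not the settings used in the paper.

```r
## Minimal sketch: C-index boosting with stability selection in R.
## Data generation and tuning values are illustrative assumptions.
library(mboost)     # componentwise gradient boosting, Cindex() family
library(stabs)      # stability selection with error control
library(survival)   # Surv() response objects

set.seed(1234)

## Simulated high-dimensional survival data:
## 5 informative predictors out of p = 100.
n <- 200; p <- 100
X <- matrix(rnorm(n * p), nrow = n)
colnames(X) <- paste0("x", seq_len(p))
lp <- drop(X[, 1:5] %*% c(1, -1, 0.8, -0.8, 0.5))   # true linear predictor
event_time <- rexp(n, rate = exp(lp))
cens_time  <- rexp(n, rate = 0.2)                   # random censoring
dat <- data.frame(time   = pmin(event_time, cens_time),
                  status = as.numeric(event_time <= cens_time),
                  X)

## Componentwise linear boosting of the smoothed concordance index.
fit <- glmboost(Surv(time, status) ~ ., data = dat,
                family  = Cindex(),                 # smooth C-index loss
                control = boost_control(mstop = 500, nu = 0.1))

## Stability selection: at most q base-learners per subsample,
## per-family error rate (PFER) bounded by 1.
stab <- stabsel(fit, q = 10, PFER = 1)
print(stab)   # selection frequencies and stably selected predictors
```

Predictors whose selection frequency across the subsamples exceeds the cutoff implied by q and the chosen PFER bound are reported as stable; these form the sparse signature that can then be refitted or used directly for risk scoring.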
