Improved Outcome Prediction Across Data Sources Through Robust Parameter Tuning

In many application areas, prediction rules trained on high-dimensional data are subsequently applied to observations from other sources, where they do not always perform well. This is because data sets from different sources can follow (slightly) different distributions, even if they come from similar populations. In the context of high-dimensional data and beyond, most prediction methods involve one or several tuning parameters. Their values are commonly chosen by maximizing the cross-validated prediction performance on the training data. This procedure, however, implicitly presumes that the data to which the prediction rule will ultimately be applied follow the same distribution as the training data. If this is not the case, less complex prediction rules that slightly underfit the training data may be preferable. Indeed, a tuning parameter controls not only the degree to which a prediction rule adjusts to the training data but also, more generally, the degree to which it adjusts to the distribution of the training data. Building on this idea, in this paper we compare various approaches, including new procedures, for choosing tuning parameter values that yield prediction rules that generalize better than those obtained by cross-validation. Most of these approaches use an external validation data set. In our extensive comparison study based on a collection of 15 transcriptomic data sets, tuning on external data and robust tuning with a tuned robustness parameter are the two approaches that lead to better-generalizing prediction rules.
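
The contrast between these tuning strategies can be sketched in code. Below is a minimal illustration in Python with scikit-learn; the synthetic data, the ridge-penalized logistic regression, the grid of C values, and the fixed tolerance `tol` are illustrative assumptions rather than the exact procedures compared in the paper (which, for instance, tunes the robustness parameter itself rather than fixing it).

```python
# Minimal sketch of three tuning strategies for a ridge-penalized logistic
# regression: standard CV tuning, tuning on external data, and a simple
# robust-tuning variant. All data and parameter values are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_data(n, p=200, shift=0.0):
    """High-dimensional binary data; `shift` perturbs the distribution."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, p))
    beta = np.zeros(p)
    beta[:10] = 1.0  # sparse true signal
    y = (X @ beta + rng.normal(size=n) > 0).astype(int)
    return X, y

X_tr, y_tr = make_data(100)                # training source
X_ext, y_ext = make_data(100, shift=0.3)   # external source, shifted distribution

Cs = np.logspace(-3, 2, 20)                # small C = strong regularization

def cv_auc(C):
    model = LogisticRegression(penalty="l2", C=C, max_iter=5000)
    return cross_val_score(model, X_tr, y_tr, cv=5, scoring="roc_auc").mean()

def ext_auc(C):
    model = LogisticRegression(penalty="l2", C=C, max_iter=5000).fit(X_tr, y_tr)
    return roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

cv_scores = np.array([cv_auc(C) for C in Cs])

# (1) Standard CV tuning: maximize cross-validated performance on training data.
C_cv = Cs[cv_scores.argmax()]

# (2) Tuning on external data: maximize performance on the external set.
C_ext = Cs[np.argmax([ext_auc(C) for C in Cs])]

# (3) Robust tuning: among values whose CV score lies within a tolerance of the
#     best CV score, take the most regularized (smallest C). Here the
#     robustness parameter is fixed; it could itself be tuned on external data.
tol = 0.02  # hypothetical robustness parameter
admissible = Cs[cv_scores >= cv_scores.max() - tol]
C_robust = admissible.min()

print(f"CV-tuned C: {C_cv:.4g}, externally tuned C: {C_ext:.4g}, "
      f"robustly tuned C: {C_robust:.4g}")
```

Because stronger regularization shrinks the fitted rule toward a simpler one, the robustly tuned rule adapts less tightly to the training distribution; this is precisely the slight underfitting that the abstract argues can pay off when the application data come from a shifted distribution.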
