Background
Random effects modelling is routinely used for clustered data, but in prediction models the random effects are commonly replaced by their mean of zero after model development. In this study, we proposed a novel approach that incorporates prior knowledge through the random effects distribution and investigated to what extent this could improve predictive performance.

Methods
Data were simulated on the basis of a random effects logistic regression model. Five prediction models were specified: a frequentist model that set the random effects to zero for all new clusters; a Bayesian model with weakly informative priors for the random effects of new clusters; and Bayesian models with expert opinion incorporated as low, medium and highly informative priors for the random effects. Expert opinion at the cluster level was elicited in the form of a truncated area of the random effects distribution. The predictive performance of the five models was assessed. In addition, we explored the impact of suboptimal expert opinion that deviated from the true quantity, as well as including expert opinion as a categorical variable in the frequentist approach. The five models were further investigated in various sensitivity analyses.

Results
The Bayesian prediction model using weakly informative priors for the random effects performed similarly to the frequentist model. Bayesian prediction models using expert opinion as informative priors showed smaller Brier scores, better overall discrimination and calibration, and better within-cluster calibration. Results also indicated that incorporating more precise expert opinion led to better predictions. The predictive performance of the frequentist models with expert opinion incorporated as a categorical variable showed similar patterns to that of the Bayesian models with informative priors. When suboptimal expert opinion was used as prior information, prediction still improved in certain settings.

Conclusions
The prediction models that incorporated cluster-level information performed better than the models that did not. The Bayesian prediction models we proposed, with cluster-specific expert opinion incorporated as priors for the random effects, showed better predictive ability in new data than the frequentist method that replaces the random effects with zero after model development.
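As a rough illustration of the setup described above (not the authors' implementation, which used full Bayesian estimation of the models), the Python sketch below simulates clustered binary outcomes from a random intercept logistic model and contrasts two predictions for a patient in a new cluster: one that plugs in a random effect of zero, and one that averages the risk over an assumed truncated-normal "expert opinion" prior for the new cluster's random intercept. All parameter values, the truncation bounds, and the use of the known simulation coefficients as if they were estimated are illustrative assumptions.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import truncnorm

rng = np.random.default_rng(42)

# --- Simulate clustered binary data from a random-intercept logistic model ---
# (illustrative parameter values, not those used in the study)
n_clusters, n_per_cluster = 20, 50
beta0, beta1, sigma_b = -1.0, 0.8, 1.0          # intercept, slope, random-intercept SD

b = rng.normal(0.0, sigma_b, n_clusters)        # cluster-specific random intercepts
x = rng.normal(size=(n_clusters, n_per_cluster))
p = expit(beta0 + beta1 * x + b[:, None])       # logit(p_ij) = beta0 + beta1*x_ij + b_i
y = rng.binomial(1, p)

# --- Prediction for a patient in a NEW, unseen cluster ---
x_new = 0.5

# 1) Frequentist convention: substitute the mean random effect, b_new = 0
p_zero = expit(beta0 + beta1 * x_new)

# 2) Expert-opinion prior: suppose the elicited opinion restricts the new
#    cluster's random intercept to a truncated region of N(0, sigma_b^2),
#    here (assumed) the upper half. Averaging the risk over draws from that
#    truncated prior gives a prediction that uses the cluster-level information.
lower, upper = 0.0, np.inf                       # assumed truncation bounds
b_draws = truncnorm.rvs((lower - 0.0) / sigma_b, upper, loc=0.0, scale=sigma_b,
                        size=10_000, random_state=rng)
p_expert = expit(beta0 + beta1 * x_new + b_draws).mean()

print(f"risk with b_new = 0:             {p_zero:.3f}")
print(f"risk averaged over expert prior: {p_expert:.3f}")
```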