Can We Trust the Results of Meta-Analyses? A Systematic Approach to Sensitivity Analysis in Meta-Analyses

Every meta-analysis involves a number of choices made by the analyst. These choices concern, for example, the estimator of effect, the model of analysis (fixed effects or random effects), or the treatment of varying study quality. The choices made can affect the results of the analysis. Every meta-analysis should therefore include a sensitivity analysis designed to probe how the choices made as part of the analysis affect its results. This paper describes a systematic approach to sensitivity analysis in meta-analyses. An index intended to summarize the results of a sensitivity analysis, the robustness score, is developed. The robustness score varies from 0 to 1. A value of 1 indicates that the results of a meta-analysis are fully robust, that is, not at all affected by the choices made by the analyst. It is proposed that every meta-analysis include a sensitivity analysis with respect to (a) the potential presence of publication bias, (b) the choice of estimator of effect (if relevant), (c) the possible presence of outlier bias (a single result having a decisive influence on the summary estimate), (d) the statistical weighting of individual estimates of effect, and (e) the assessment of study quality. A recently reported meta-analysis of studies evaluating the road safety effects of daytime running lights for cars is used as a case study to illustrate the proposed approach to sensitivity analysis.
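The abstract does not reproduce the formula for the robustness score, so the sketch below should be read as a minimal illustration under an assumed operationalization: the score is taken to be the proportion of alternative analysis choices (fixed vs. random effects, a publication-bias adjustment, leave-one-out checks for outliers, weighted vs. unweighted pooling, quality-restricted subsets) whose summary estimate stays within a chosen tolerance of the primary estimate. The function, the tolerance parameter, and the example figures are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: the paper defines its own robustness score.
# Here the score is *assumed* to be the share of alternative analyses whose
# summary estimate stays within `tolerance` (relative change) of the primary one.
from typing import Dict


def robustness_score(primary_estimate: float,
                     alternative_estimates: Dict[str, float],
                     tolerance: float = 0.05) -> float:
    """Return a value in [0, 1]; 1 means no alternative analysis choice
    moves the summary estimate by more than the chosen relative tolerance."""
    if not alternative_estimates:
        return 1.0
    within = sum(
        abs(est - primary_estimate) / abs(primary_estimate) <= tolerance
        for est in alternative_estimates.values()
    )
    return within / len(alternative_estimates)


# Hypothetical summary estimates (accident rate ratios) from re-running a
# daytime-running-lights meta-analysis under alternative analysis choices:
alternatives = {
    "fixed_effects": 0.93,
    "random_effects": 0.95,
    "publication_bias_adjusted": 0.97,
    "leave_out_most_influential_study": 0.94,
    "unweighted_mean": 0.96,
    "high_quality_studies_only": 0.95,
}
print(robustness_score(primary_estimate=0.94, alternative_estimates=alternatives))
```

Under this reading, a score near 1 means the conclusion survives essentially all of the alternative choices listed in (a) through (e), while a score near 0 means the summary estimate is highly sensitive to how the analysis is specified.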