Comments on 'Sequential methods for random-effects meta-analysis' by J. P. Higgins, A. Whitehead and M. Simmonds, Statistics in Medicine 2010; DOI: 10.1002/sim.4088.

We wish to commend Higgins and colleagues on their recent article ‘Sequential methods for random-effects meta-analysis’ [1]. Repeated updating of a meta-analysis is mandatory if its information is to be kept current. As an adverse effect of these updates, however, repeated analyses increase the risk of type 1 error and can lead to inaccurate communication of uncertainty in conclusions [2, 3]. This increased risk has been ignored by many until now, and the current version of The Cochrane Handbook does not discuss sequential multiplicity directly [4]. We hope that Higgins and colleagues, with their current publication, will do much to amend this omission. We agree entirely with Higgins and colleagues that there must be an emphasis on good empirical properties and that the approach must be relatively straightforward. At The Copenhagen Trial Unit, we have been using Trial Sequential Analysis (TSA) to conduct sequential analyses with the aim of adjusting for sparse data and sequential multiplicity [5, 6]. TSA uses the O’Brien-Fleming boundaries to monitor significance (and futility) as trials are added to a cumulative meta-analysis. Predictions have to be made of the proportion of control group participants experiencing the outcome in question, the anticipated intervention effect size in the experimental group, the type 1 error, the type 2 error, and the expected ultimate heterogeneity. Based on this information, the required information size and the trial sequential monitoring boundaries are calculated [5, 6]. Like Higgins and colleagues, we consider the prediction of heterogeneity to be a major challenge. Our approach has been to consider different realistic and relevant possibilities a priori and to explore the impact of different values of heterogeneity on the inferential results. As such, uncertainties in priors can be considered and discussed in terms of the uncertainties they cause in conclusions.
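For readers unfamiliar with the mechanics, the O’Brien-Fleming-type monitoring described above can be sketched in a few lines. This is a minimal illustration of the Lan-DeMets alpha-spending function of O’Brien-Fleming type, not the TSA software itself; the function names are ours, and the boundary formula shown is exact only for a single look at a given information fraction (exact multi-look boundaries require recursive numerical integration).

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def obf_alpha_spent(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type spending function: cumulative
    two-sided type 1 error spent at information fraction t (0 < t <= 1)."""
    z = N.inv_cdf(1 - alpha / 2)
    return 2 * (1 - N.cdf(z / t ** 0.5))

def obf_boundary(t, alpha=0.05):
    """Two-sided z-value boundary implied by the spending function for a
    single look taken at information fraction t."""
    return N.inv_cdf(1 - obf_alpha_spent(t, alpha) / 2)

# Early looks demand extreme evidence; at full information the boundary
# relaxes to the conventional 1.96.
for t in (0.25, 0.5, 0.75, 1.0):
    print(f"information fraction {t:.2f}: z boundary {obf_boundary(t):.2f}")
```

The key property this illustrates is that very little of the type 1 error is spent early on, which is what allows a cumulative meta-analysis to be examined repeatedly without inflating the overall error rate.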
Similar explorative analyses can be done by varying other variables, most notably the control group event proportion and the anticipated effect size. In an explorative spirit, we performed TSA on the bleeding peptic ulcer meta-analysis used by Higgins and colleagues [1]. We wondered what set of ‘prior predictions’ in TSA would correspond to the inverse gamma (IG) prior distributions. Given the size of the statistical heterogeneity in the full meta-analysis of 23 trials (I² = 72 per cent), we decided to focus our comparison on the ‘approximate semi-Bayes IG (1.5, 1) sequential analysis’, for which significance was declared after 15 of the 23 trials. Using a type 1 error of 0.05, a type 2 error of 0.20, and using the included trials to estimate the heterogeneity and the control event proportion, we found—using TSA—that the meta-analysis crossed the significance boundary after the 15th trial when we assumed a 25 per cent relative reduction in the odds ratio. The odds ratio at stopping was 0.38 with a sequentially adjusted 95 per cent confidence interval of 0.16–0.92. The heterogeneity-adjusted required information size was 2553 (Figure 1). We would very much like to hear Higgins and colleagues’ impressions of this comparison. Is there a measure of information size incorporated in the assumptions for the approximate semi-Bayes sequential meta-analysis? For the approximate semi-Bayes technique, can the parameters of the prior be thought of in terms of any clinical parameters, such as the anticipated effect size or the control group event proportion?
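To make the ‘required information size’ concept concrete, the following sketch shows how a heterogeneity-adjusted information size can be computed from the predictions TSA asks for. The control event proportion (0.10), the risk-scale translation of a 25 per cent relative reduction, and the diversity (D²) value used below are hypothetical placeholders of ours; the sketch uses a conventional two-proportion sample-size formula rather than the TSA software’s exact calculation, and it does not attempt to reproduce the 2553 figure from our analysis.

```python
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def required_information_size(pc, rrr, alpha=0.05, beta=0.20, diversity=0.0):
    """Total number of participants needed to detect a relative risk
    reduction `rrr` given control event proportion `pc`, inflated by
    1 / (1 - diversity) to adjust for anticipated heterogeneity."""
    pe = pc * (1 - rrr)             # experimental group event proportion
    p_bar = (pc + pe) / 2           # average event proportion
    z_a = N.inv_cdf(1 - alpha / 2)  # two-sided type 1 error
    z_b = N.inv_cdf(1 - beta)       # power = 1 - beta
    n = 4 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / (pc - pe) ** 2
    return n / (1 - diversity)

# Hypothetical inputs: pc = 0.10, 25 per cent relative reduction,
# type 1 error 0.05, type 2 error 0.20, diversity (D^2) = 0.50.
print(round(required_information_size(0.10, 0.25)))                  # unadjusted
print(round(required_information_size(0.10, 0.25, diversity=0.50)))  # adjusted
```

With a diversity of 50 per cent, the adjusted information size is exactly double the unadjusted one, which illustrates why the a priori prediction of heterogeneity has such a strong influence on when a cumulative meta-analysis can be declared conclusive.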