TO THE EDITOR: Trippa et al propose an outcome-adaptive randomization (AR) procedure that essentially ensures that the number of patients on the control arm will be approximately the same as the number of patients on the experimental arm with the most patients. As such, it is an improvement over standard AR procedures, which are not useful because they result in larger trials and can lead to bias in the trial results. However, there are two important things to note about the proposed AR procedure. First, if the trial has only two treatment arms (one experimental arm and the control arm), the proposed AR procedure reduces to an equal one-to-one randomization fixed sample size trial design. We assume, therefore, that Trippa et al would agree with us that AR should not be used in this situation. Second, AR procedures involve intensive interim analyses of the accruing data. It is well recognized that interim monitoring increases the efficiency of clinical trials. Because the proposed AR procedure assigns fewer patients to trial arms that are doing poorly, a relevant assessment of its performance requires comparison with a trial design that incorporates interim monitoring for futility/inefficacy. Such monitoring is a standard part of randomized clinical trial design, in which experimental treatment arms that are not performing sufficiently well relative to the control arm are dropped during the trial.

To compare the proposed AR procedure with a standard group sequential design (ie, equal randomization to the treatment arms with interim monitoring for futility/inefficacy), we consider trial designs with one control arm and three experimental arms, a total sample size of 140, and response rate as the outcome. (Using response rates rather than survival data favors AR in such comparisons, because outcomes that are observed earlier allow quicker adaptation.) Table 1 presents the power and average sample sizes for (1) an equal (balanced) randomization fixed sample size design with no interim monitoring (35 patients on each arm), (2) the AR design as described in Trippa et al, and (3) an equal randomization design incorporating commonly used group sequential futility monitoring (see the Table 1 footnotes for design details). The results in Table 1 demonstrate that, compared with an equal randomization fixed sample size design, the proposed AR procedure has higher power for experimental arms that work and smaller average sample sizes for experimental arms that do not, replicating the results of Trippa et al. For example, there is a power benefit of 0.08 (0.88 - 0.80) when there is a single active experimental arm (row 2 of Table 1), which can be compared with the 0.06 benefit seen in Trippa et al (row 1 of their Table 2). However, compared with a standard group sequential design, the proposed AR procedure offers no advantage in terms of power or the number of patients assigned to efficacious treatment arms.
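As a purely illustrative companion to this comparison, the sketch below simulates highly simplified versions of the three designs: a fixed equal-randomization design, a Thompson-sampling-style AR rule in which the control allocation is matched to the largest experimental arm (in the spirit of, but not identical to, the Trippa et al procedure), and equal randomization with a single interim futility look. The response rates, block size, futility rule, and final test used here are assumptions chosen for illustration only; the sketch does not reproduce the designs or the operating characteristics reported in Table 1.

```python
# Illustrative simulation only: simplified versions of three designs for a trial
# with 1 control arm and 3 experimental arms and a binary response outcome.
# The response rates, block size, futility rule, and final test below are
# assumptions for illustration; they are not the designs of Table 1 or Trippa et al.
import numpy as np

rng = np.random.default_rng(2024)

N_TOTAL = 140                       # total patients across all 4 arms
P_TRUE = [0.15, 0.35, 0.15, 0.15]   # arm 0 = control; arm 1 is the only active arm
N_ARMS = len(P_TRUE)
Z_CRIT = 1.96                       # one-sided final test, alpha ~ 0.025 (illustrative)
N_SIM = 2000                        # simulation replicates per design


def z_stat(resp_e, n_e, resp_c, n_c):
    """One-sided two-proportion z-statistic (experimental arm vs control)."""
    if n_e == 0 or n_c == 0:
        return 0.0
    p_pool = (resp_e + resp_c) / (n_e + n_c)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_e + 1 / n_c))
    return 0.0 if se == 0 else (resp_e / n_e - resp_c / n_c) / se


def fixed_design():
    """Equal randomization, no interim monitoring: 35 patients per arm."""
    n = np.full(N_ARMS, N_TOTAL // N_ARMS)
    return n, rng.binomial(n, P_TRUE)


def adaptive_design(block=10, n_draws=500):
    """Simplified AR: Thompson-style allocation from Beta(1,1) posteriors, with
    the control allocation matched to the largest experimental arm (in the spirit
    of, not identical to, the Trippa et al procedure)."""
    n = np.zeros(N_ARMS, dtype=int)
    resp = np.zeros(N_ARMS, dtype=int)
    while n.sum() < N_TOTAL:
        draws = rng.beta(1 + resp, 1 + n - resp, size=(n_draws, N_ARMS))
        prob_better = (draws[:, 1:] > draws[:, [0]]).mean(axis=0)  # P(p_k > p_0)
        w_exp = prob_better / prob_better.sum()
        w = np.concatenate(([w_exp.max()], w_exp))  # control tracks largest arm
        w /= w.sum()
        for arm in rng.choice(N_ARMS, size=min(block, N_TOTAL - n.sum()), p=w):
            n[arm] += 1
            resp[arm] += rng.random() < P_TRUE[arm]
    return n, resp


def futility_design():
    """Equal randomization with one interim look: drop experimental arms whose
    observed response rate does not exceed the control rate (a crude rule)."""
    n = np.full(N_ARMS, N_TOTAL // (2 * N_ARMS))
    resp = rng.binomial(n, P_TRUE)
    keep = [0] + [k for k in range(1, N_ARMS) if resp[k] / n[k] > resp[0] / n[0]]
    n_stage2 = (N_TOTAL - n.sum()) // len(keep)
    for k in keep:
        resp[k] += rng.binomial(n_stage2, P_TRUE[k])
        n[k] += n_stage2
    return n, resp


def summarize(design):
    """Empirical power per experimental arm and average sample size per arm."""
    power = np.zeros(N_ARMS - 1)
    avg_n = np.zeros(N_ARMS)
    for _ in range(N_SIM):
        n, resp = design()
        avg_n += n / N_SIM
        for k in range(1, N_ARMS):
            power[k - 1] += (z_stat(resp[k], n[k], resp[0], n[0]) > Z_CRIT) / N_SIM
    return power, avg_n


for name, design in [("fixed", fixed_design),
                     ("adaptive", adaptive_design),
                     ("futility", futility_design)]:
    power, avg_n = summarize(design)
    print(f"{name:>9}: power per arm {np.round(power, 2)}, "
          f"average n per arm {np.round(avg_n, 1)}")
```

The sketch is intended only to show how empirical power and average per-arm sample sizes can be tabulated for each design under the same assumed response rates, which is the form of comparison that Table 1 makes.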
Trippa et al conclude, “Bayesian adaptive designs in glioblastoma trials would result in trials requiring substantially fewer overall patients, with more patients being randomly assigned to efficacious arms.” Because AR approaches introduce complexity into the design, conduct, and interpretation of clinical trials and require additional resources, the use of AR requires a credible demonstration of tangible improvement relative to standard clinical trial designs. This requirement has not been met for the Bayesian adaptive design proposed by Trippa et al.
[1] B. Freidlin, et al. Reply to Y. Yuan et al. 2011.
[2] B. Freidlin, et al. Outcome-adaptive randomization: is it useful? Journal of Clinical Oncology, 2011.
[3] J. Korinek, et al. Proceedings of the American Society of Clinical Oncology. 1982.
[4] S. J. Pocock, et al. Interim analyses for randomized clinical trials: the group sequential approach. Biometrics, 1982.
[5] L. Trippa, et al. Bayesian adaptive randomized trial design for patients with recurrent glioblastoma. Journal of Clinical Oncology, 2012.
[6] B. Freidlin, et al. Monitoring for lack of benefit: a critical component of a randomized clinical trial. Journal of Clinical Oncology, 2009.
[7] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 1933.