We would like to draw your attention to the implications for oncologic drug development of the article by Rothmann et al. published in the January 2003 special edition of SIM on non-inferiority trials [1]. The methods described in this article are increasingly used by regulators in the U.S.A. and Europe to evaluate the design and analysis of trials of new agents. The consequences for trial size are enormous. Shlaes and Moellering have expressed closely related concerns for anti-infective drug development [2].

There has been something of a paradigm shift in the approach to cancer treatment over recent years. Academia and industry alike are now fully engaged in the discovery, research and development of novel, well tolerated, biologically targeted (cytostatic) anticancer agents. It is hoped that these new treatments will offer significant advantages to patients in terms of improved tolerability, but they may not always demonstrate increased efficacy. This naturally leads to the use of active-control, non-inferiority trials to compare the new agent with a standard agent, the conventional aim being to show no clinically relevant loss of efficacy. Such trials are often designed to demonstrate that the new treatment retains some fraction of the established effect of the standard, say at least 1/2. Note that this fraction is essentially arbitrary: no regulatory guidance currently mandates it as the minimum required either to demonstrate clinical non-inferiority or to secure regulatory approval. If the standard treatment was previously shown to double survival in a particular disease setting (hazard ratio = 0.50, p = 0.02, say), and the goal for a new, better tolerated therapy is to retain at least 1/2 of this effect, a routine sample size calculation shows that a total of 350 events is required to provide 90 per cent power at the one-sided, 2.5 per cent significance level.
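The 350-event figure can be reproduced from the standard events formula for a log-rank comparison with 1:1 allocation, treating the margin as the fraction of the standard's effect that may be lost on the log hazard-ratio scale. The sketch below is illustrative only (the function name and parameterization are ours, not from [1]), and it ignores the uncertainty in the historical estimate discussed later in this letter.

```python
import math
from statistics import NormalDist  # standard library; no SciPy needed

def ni_events(standard_hr, retention=0.5, alpha=0.025, power=0.90):
    """Total events for a 1:1 non-inferiority trial that must rule out
    losing more than (1 - retention) of the standard's effect, with the
    historical log hazard ratio treated as known (no uncertainty)."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # one-sided significance level
    z_b = NormalDist().inv_cdf(power)
    # Magnitude of the non-inferiority margin on the log-HR scale:
    # the share of the standard's log effect that may be given up.
    log_margin = (1 - retention) * abs(math.log(standard_hr))
    return math.ceil(4 * (z_a + z_b) ** 2 / log_margin ** 2)

# Standard doubles survival (HR = 0.50); retain at least half the effect.
print(ni_events(0.50))  # -> 350 events
```

With retention = 1/2 the margin magnitude is |log 0.50|/2 ≈ 0.347 (a new-versus-standard hazard ratio of about 1.41), and the formula gives exactly the 350 events quoted above.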
There are several important issues associated with the design and analysis of non-inferiority trials, including 'constancy' (the extent to which the standard treatment performs as it did in previous trials) and 'assay sensitivity' (the ability of a non-inferiority trial to detect a real difference between the treatments compared). Much has been published in this area. The regulatory guidelines ICH E9 and E10 describe the issues in detail and provide some general guidance with respect to trial design and conduct [3, 4]. An issue not addressed in these guidelines arises from the fact that the standard effect is an estimate from earlier work and so is not known with certainty. Sample size calculations often ignore this uncertainty. Hung et al. have shown that this approach increases the probability of erroneously accepting the efficacy of a truly inferior drug [5]. The approach offered by Rothmann tackles this issue. Assuming constancy of the effect of the standard and accepting assay sensitivity, Rothmann proposes a formal statistical com-
[1] Rothmann M, et al. Design and analysis of non-inferiority mortality trials in oncology. Statistics in Medicine, 2002.
[2] Tsong Y, et al. Some fundamental issues with non-inferiority testing in active controlled trials. Statistics in Medicine, 2002.
[3] Tsong Y, et al. Utility and pitfalls of some statistical methods in active controlled clinical trials. Controlled Clinical Trials, 2002.
[4] Shlaes D, et al. The United States Food and Drug Administration and the end of antibiotics. 2002.
[5] Fisher L, et al. Active-control trials: how would a new agent compare with placebo? A method illustrated with clopidogrel, aspirin, and placebo. American Heart Journal, 2001.