Introduction to Discussion Papers on Draft FDA Guidance on Adaptive Designs

Congratulations to the Food and Drug Administration (FDA) on its draft guidance on adaptive design (AD). It is a huge step toward adopting an innovative approach and streamlining the process of adaptive trials. Since it is a draft, there is an opportunity to ask useful questions, such as: (1) Does the draft guidance deliver what it was intended to deliver? (2) Does it cover all important aspects? (3) Has the guidance addressed the most important AD questions the industry may have? (4) What are the differences between the views of regulators and others (academia and industry)? To answer these questions and provide quality feedback to the FDA, we have invited several AD pioneers and experts from academia and industry to present their opinions. The opinions are diverse and reflect individual views and possible biases. Collectively, however, they provide a relatively unbiased “overall picture.”

The papers are organized as follows: leading off is the special article from the former PhRMA AD Working Group, “Viewpoints on the FDA Draft Adaptive Designs Guidance From the PhRMA Working Group”; this is followed by six discussion papers from industry and academic statisticians; and finally the special article by Qing and Chi, “Understanding the FDA Guidance on Adaptive Designs: Historical, Legal, and Statistical Perspectives.”

Issues in AD are complicated. Some of them are unique to AD, while others are common to all designs, whether adaptive, standard group sequential, or classical. It is important to differentiate which types of issues are associated with which designs, and to separate real practical issues from those that are imaginary or purely theoretical. Several standard technical terms from classical designs, such as bias, p value, and confidence interval, become unclear and are not uniquely defined in the context of adaptive designs. An example is the concept of “bias.” Statistical bias is defined with respect to repeated experiments.
Is this concept still applicable to a single experiment, such as a single clinical trial? Even under repeated experiments, the amount of data the sponsor can access is larger than what the regulatory authorities see, which in turn is larger than what a patient is provided. The regulatory agencies and doctors/patients see a subset of positive results (including both true positives and false positives); in this sense, the bias is conditional. Therefore, bias is present in both adaptive and classical designs and is not unique to AD. Similarly,
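The conditional bias described above can be illustrated with a short simulation, offered here as an illustrative sketch rather than anything from the guidance itself: when only trials that cross a significance threshold are reported, the average reported effect overstates the true effect, even though each individual trial's estimator is unbiased. All parameter values below (true effect, sample size, threshold) are hypothetical assumptions chosen for illustration.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2  # hypothetical true treatment effect (standardized units)
N_PER_ARM = 50     # patients per arm in each simulated trial
N_TRIALS = 5000    # number of repeated experiments

def simulate_trial():
    """Simulate one two-arm trial with unit-variance outcomes; return the
    estimated effect and whether it crossed a crude significance threshold."""
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_ARM)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_ARM)]
    estimate = statistics.mean(treated) - statistics.mean(control)
    se = (2.0 / N_PER_ARM) ** 0.5          # SE of the difference of means
    significant = estimate / se > 1.96      # one-sided 2.5% level
    return estimate, significant

estimates, significant_estimates = [], []
for _ in range(N_TRIALS):
    est, sig = simulate_trial()
    estimates.append(est)
    if sig:
        significant_estimates.append(est)

# Over all repeated trials the estimator is (essentially) unbiased,
# but conditional on significance the reported effect is inflated.
print(f"True effect:                     {TRUE_EFFECT:.3f}")
print(f"Mean estimate, all trials:       {statistics.mean(estimates):.3f}")
print(f"Mean estimate, significant only: {statistics.mean(significant_estimates):.3f}")
```

The gap between the last two numbers is the conditional bias: nothing about any single trial's analysis is wrong, yet the subset of results that reaches regulators and patients systematically overstates the effect, whether the design is adaptive or classical.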