Sensible guidelines for the conduct of large randomized trials

Since the first randomized controlled trial (RCT) in the 1950s, RCTs have had an enormous influence on the evaluation of interventions (both preventative and therapeutic) across a wide range of medical conditions. The design and conduct of trials have evolved considerably, and RCTs have become the ‘gold standard’ for the evaluation of therapies. Initial trials were funded by governments and were designed and conducted largely by academic investigators. More recently, the majority of trials have been funded by industry and are often run by for-profit contract research organizations. Over the last 25 years, there have also been marked changes in the design and conduct of RCTs. While some of these changes have been positive, many have not. A positive development was the increasing recognition that many treatments were likely to have, at best, moderate benefits (or harms), which has led to very large trials, at times involving several thousand or even tens of thousands of subjects. Although many have argued for extreme simplicity in trial conduct, most large trials involve varying, and at times substantial, degrees of complexity. Consequently, these trials have become extremely expensive to conduct. When trials evaluate new or patented products or devices, their sponsors have sufficient resources to invest tens of millions, and at times over a hundred million, dollars in a single trial. What has caused the cost of trials to skyrocket? Are the complexities and high costs necessary and justifiable? What are the associated opportunity costs? There is little evidence that many widely accepted, or even required, procedures and processes for RCTs have actually improved the trials.
Regulatory guidelines for the conduct of RCTs (e.g., Good Clinical Practice [GCP] guidelines) are being held up as the new standard, particularly their prescriptions for study monitoring, and the flexibility that the GCP guidelines allow is rarely utilized. What is the basis for asserting that the procedures enshrined in these GCPs are indeed good, clinically relevant, or even practical? Collection of excess data, excessive numbers of visits, and, in particular, on-site monitoring considerably increase the cost of trials. While the intent of these guidelines is to improve the quality of trials, do we really know whether any improvements have been worthwhile, or whether the additional resources could have been spent more effectively? Indeed, are there alternative and more efficient ways to enhance the quality of trials? Should there be flexible, differing approaches based on the stage of development of an intervention? How much of the complexity is driven by actual government regulations, and how much by overly cautious interpretation by industry regulatory departments? In recent years, a growing number of approvals (national, local, and institutional) have been required before trials can be initiated. These multiple approvals are lengthy, can delay the initiation of studies by a year or two, and add considerably to the costs of RCTs. Are these multiple approvals of any value? Are there simpler ways to safeguard the science and ethics of trials? For a scientific method that is at the heart of evidence-based medicine, there is no good evidence that the layers of complexity, approvals, processes, and laws intended to protect subjects entering RCTs have actually achieved their purpose. What is clear is that such processes are extremely expensive and delay studies. At times, they even prevent the conduct of important trials of generic questions, especially those that are not supported by industry.
We are concerned that multiple layers of complexity, approvals, and laws may actually be damaging public health by preventing the conduct of trials of major public health importance. Over the last decade, many thoughtful leaders in clinical trials methodology have questioned current practices. To obtain a broad perspective from academia, regulators, and industry, we convened a two-day workshop in Washington, DC on January 25 and 26, 2007, in which 66 scientists participated. The participants, drawn from around the world, included clinical trial practitioners from industry, academia, and government. Their deliberations are summarized in six position papers, which are