Finding Alternatives to the Dogma of Power-Based Sample Size Calculation: Is a Fixed Sample Size Prospective Meta-Experiment a Potential Alternative?

Sample sizes for randomized controlled trials are typically based on power calculations, which require investigators to specify values for parameters such as the treatment effect; this is often difficult because sufficient prior information is lacking. The objective of this paper is to propose an alternative design that circumvents the need for a sample size calculation. In a simulation study, we compared a meta-experiment approach with the classical approach for assessing treatment efficacy. The meta-experiment approach meta-analyzes the results of 3 randomized trials, each with a fixed sample size of 100 subjects. The classical approach involves a single randomized trial with the sample size calculated on the basis of an a priori-formulated hypothesis. For the sample size calculation in the classical approach, we used published articles to characterize the errors typically made in these a priori hypotheses. A prospective meta-analysis of data from trials of fixed sample size provided, on average, the same precision, power and type I error rate as the classical approach. The meta-experiment approach may thus provide an alternative design that does not require a sample size calculation, addresses the essential need for study replication, and may yield results with greater external validity.
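To make the classical approach concrete, below is a minimal sketch of a power-based sample size calculation for a two-arm trial with a continuous outcome, using the standard normal approximation; the effect size delta and standard deviation sigma in the usage line are illustrative placeholders, not values taken from the paper.

import math
from scipy.stats import norm

def sample_size_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for comparing two means."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided type I error threshold
    z_beta = norm.ppf(power)           # quantile for the target power
    n = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

print(sample_size_per_arm(delta=0.4, sigma=1.0))  # about 99 subjects per arm

Misjudging either delta or sigma, which the abstract notes is common in practice, directly inflates or deflates n; the meta-experiment design sidesteps this sensitivity by fixing the per-trial sample size in advance.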

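The meta-experiment arm can be sketched in the same spirit: simulate three trials of 100 subjects each (50 per arm) and pool them by inverse-variance weighting. This is a simplified fixed-effect pooling for illustration only; the true effect, trial count and replicate number are assumptions, and a random-effects model could be substituted without changing the overall structure.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def one_trial(true_delta, sigma=1.0, n_per_arm=50):
    # Simulate one two-arm trial; return the effect estimate and its variance.
    control = rng.normal(0.0, sigma, n_per_arm)
    treated = rng.normal(true_delta, sigma, n_per_arm)
    estimate = treated.mean() - control.mean()
    variance = control.var(ddof=1) / n_per_arm + treated.var(ddof=1) / n_per_arm
    return estimate, variance

def meta_experiment(true_delta, n_trials=3):
    # Pool n_trials fixed-size trials with inverse-variance weights.
    estimates, variances = zip(*(one_trial(true_delta) for _ in range(n_trials)))
    weights = 1.0 / np.array(variances)
    pooled = np.sum(weights * np.array(estimates)) / weights.sum()
    se = np.sqrt(1.0 / weights.sum())
    p = 2 * (1 - norm.cdf(abs(pooled / se)))  # two-sided test of no effect
    return pooled, se, p

# Empirical power: share of simulated meta-experiments rejecting at the 5% level.
rejections = [meta_experiment(true_delta=0.4)[2] < 0.05 for _ in range(2000)]
print("empirical power ~", np.mean(rejections))

Running many such replicates, alongside analogous replicates of a single trial sized by the power calculation above, yields the comparisons of precision, power and type I error described in the abstract.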