Prospective observational studies to assess comparative effectiveness: the ISPOR good research practices task force report.

OBJECTIVE: In both the United States and Europe there has been increased interest in using comparative effectiveness research on interventions to inform health policy decisions. Prospective observational studies will undoubtedly be conducted with increasing frequency to assess the comparative effectiveness of different treatments, including as a tool for "coverage with evidence development," "risk-sharing contracting," or as a key element in a "learning health-care system." The principal alternatives for comparative effectiveness research include retrospective observational studies, prospective observational studies, randomized clinical trials, and naturalistic ("pragmatic") randomized clinical trials.

METHODS: This report details the recommendations of a Good Research Practices Task Force on Prospective Observational Studies for comparative effectiveness research. Key issues discussed include how to decide when to conduct a prospective observational study in light of its advantages and disadvantages relative to the alternatives, and the report summarizes the challenges and approaches to the appropriate design, analysis, and execution of prospective observational studies to make them most valuable and relevant to health-care decision makers.

RECOMMENDATIONS: The task force emphasizes the need for precision and clarity in specifying the key policy questions to be addressed and recommends that studies be designed with the goal of drawing causal inferences whenever possible. If a study is being performed to support a policy decision, it should be designed as hypothesis testing; this requires drafting a protocol as if subjects were to be randomized and requires that investigators clearly state the purpose or main hypotheses, define the treatment groups and outcomes, identify measured and unmeasured confounders, and specify the primary analyses and required sample size. Separate from analytic and statistical approaches, study design choices can strengthen the ability to address potential biases and confounding in prospective observational studies. The use of inception cohorts, new-user designs, multiple comparator groups, matching designs, and assessment of outcomes thought not to be affected by the therapies being compared are several strategies that should be given strong consideration, recognizing that there may be feasibility constraints. The reasoning behind all study design and analytic choices should be transparent and explained in the study protocol. Execution of prospective observational studies is as important as their design and analysis in ensuring that results are valuable and relevant, especially with regard to capturing the target population of interest and achieving reasonably complete and nondifferential follow-up. In keeping with the importance of declaring a prespecified hypothesis, we believe that the credibility of many prospective observational studies would be enhanced by their registration on appropriate publicly accessible sites (e.g., clinicaltrials.gov and encepp.eu) in advance of their execution.
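The design strategies named above (new-user cohorts, matching designs) are commonly operationalized with propensity-score methods. As a minimal illustrative sketch only, not taken from the task force report, the following Python code performs greedy 1:1 propensity-score matching with a caliper on a simulated new-user cohort; the variable names (age, severity, treated, outcome) and the caliper value are assumptions made for the example.

```python
# Illustrative sketch (not from the report): 1:1 propensity-score matching
# with a caliper for a hypothetical new-user cohort. All column names and
# parameter values are assumptions made for this example.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_cohort(df, covariates, caliper=0.05, seed=0):
    """Greedy 1:1 nearest-neighbour matching on the estimated propensity score."""
    X = df[covariates].to_numpy()
    ps = LogisticRegression(max_iter=1000).fit(X, df["treated"]).predict_proba(X)[:, 1]
    df = df.assign(ps=ps)

    treated = df[df["treated"] == 1].sample(frac=1, random_state=seed)  # shuffle match order
    controls = df[df["treated"] == 0].copy()

    pairs = []
    for idx, row in treated.iterrows():
        dist = (controls["ps"] - row["ps"]).abs()
        if dist.empty or dist.min() > caliper:
            continue                       # no acceptable control within the caliper
        j = dist.idxmin()
        pairs.append((idx, j))
        controls = controls.drop(j)        # match without replacement

    t_idx, c_idx = zip(*pairs)
    return df.loc[list(t_idx)], df.loc[list(c_idx)]

# Example use with a simulated cohort (purely synthetic data)
rng = np.random.default_rng(42)
n = 2000
age = rng.normal(60, 10, n)
severity = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(-3 + 0.04 * age + 0.5 * severity)))
treated = rng.binomial(1, p_treat)
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.3 * severity - 0.4 * treated))))
cohort = pd.DataFrame({"age": age, "severity": severity, "treated": treated, "outcome": outcome})

t, c = match_cohort(cohort, ["age", "severity"])
print(f"matched pairs: {len(t)}, risk difference: {t['outcome'].mean() - c['outcome'].mean():.3f}")
```

In an actual prospective observational study, the covariate set, caliper, and balance diagnostics would be prespecified in the protocol, consistent with the hypothesis-testing orientation recommended above.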
