Commentary: how to report instrumental variable analyses (suggestions welcome).

Instrumental variable (IV) methods are becoming mainstream in comparative effectiveness research, but IV methods are radically different from traditional epidemiologic methods. The goal of IV methods is to eliminate confounding without ever measuring the confounders. This apparent miracle can be achieved only when four conditions are met (see below). Here we suggest a checklist for investigators who use IV methods. Like others before us, we hope this step-by-step guide will improve the reporting of IV estimates and increase the transparency of the underlying assumptions.

Our discussion focuses on reports of causal effects of medical interventions and is informed by two papers by Davies et al that appear in this issue of Epidemiology: an application that estimates the causal effects of cyclooxygenase-2 (COX-2) selective versus nonselective nonsteroidal anti-inflammatory drugs (NSAIDs), and a literature review of IV papers that describes their use, and perhaps misuse, in epidemiology. We supplement this review with additional information from our own review of IV analyses of observational studies with a relatively well-defined medical intervention. Details of our review can be found in the online supplement (http://links.lww.com/eDe/a664).

IV methods require a variable, the "instrument," that meets the three so-called instrumental conditions: (1) the instrument is associated with the treatment, (2) the instrument does not affect the outcome except through treatment (also known as the exclusion restriction assumption), and (3) the instrument does not share any causes with the outcome. An example of such an instrument is the randomization indicator in double-blind randomized experiments. Davies et al summarize instruments that have been proposed in epidemiologic studies, including the physician-preference type they use themselves. Unfortunately, no variable can be proved to be an instrument in observational studies, because only condition (1) can be empirically verified.

We outline some steps for reporting IV analyses based on the variables proposed as instruments (Figure) and discuss how these steps have been reported in previous studies (Table). A detailed specification of how IV methods should be implemented is beyond the scope of this commentary.

The first step in our reporting flowchart is to empirically verify condition (1). When the association between the proposed instrument and the treatment is weak, the proposed "weak" instrument may amplify biases due to small violations of conditions (2) or (3), producing very biased effect estimates. Alternatively, if the proposed instrument is very strong, it may be more likely to violate conditions (2) or (3); in the extreme, a perfect correlation between the proposed instrument and the observational treatment implies that the proposed instrument is associated with the same set of (possibly unmeasured) confounders as the treatment. Most prior studies have evaluated the strength of their proposed instrument.
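To make condition (1) and the basic form of an IV estimate concrete, the following minimal Python sketch simulates data with a binary proposed instrument Z, a binary treatment A, an outcome Y, and an unmeasured confounder U. It checks the instrument-treatment ("first-stage") association and computes a Wald-type estimate, that is, the instrument-outcome contrast divided by the instrument-treatment contrast. This is an illustrative sketch only, not the analysis of Davies et al; all variable names, effect sizes, and simulated data are assumptions made for the example.

```python
# Illustrative sketch (not from the commentary): Wald-type IV estimate with a
# binary proposed instrument Z, binary treatment A, outcome Y, and an unmeasured
# confounder U. Simulated data and parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

U = rng.normal(size=n)                                  # unmeasured confounder of A and Y
Z = rng.binomial(1, 0.5, size=n)                        # proposed instrument
A = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * Z + U))))   # treatment, affected by Z and U
Y = 1.0 * A + 1.5 * U + rng.normal(size=n)              # outcome; true treatment effect is 1.0

# Condition (1) check: association between Z and A (first-stage risk difference).
first_stage = A[Z == 1].mean() - A[Z == 0].mean()

# Wald estimator: instrument-outcome contrast divided by instrument-treatment contrast.
wald_estimate = (Y[Z == 1].mean() - Y[Z == 0].mean()) / first_stage

# Naive treated-vs-untreated comparison, confounded by U, shown for contrast.
naive_estimate = Y[A == 1].mean() - Y[A == 0].mean()

print(f"first-stage risk difference: {first_stage:.3f}")
print(f"Wald IV estimate:            {wald_estimate:.3f}")
print(f"naive (confounded) estimate: {naive_estimate:.3f}")
```

In this hypothetical simulation, the naive contrast is distorted by the unmeasured confounder U, whereas the Wald ratio recovers the true effect because Z satisfies the three conditions by construction. With a weaker instrument (a smaller first-stage contrast in the denominator), the same ratio becomes far more sensitive to small violations of conditions (2) or (3), which is the weak-instrument concern discussed above.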

[1] N. M. Davies, et al. COX-2 Selective Nonsteroidal Anti-inflammatory Drugs and Risk of Gastrointestinal Tract Complications and Myocardial Infarction: An Instrumental Variable Analysis, 2013, Epidemiology.

[2] N. M. Davies, et al. Issues in the reporting and conduct of instrumental variable studies: a systematic review, 2013, Epidemiology.

[3] J. Robins, et al. Credible Mendelian Randomization Studies: Approaches for Evaluating the Instrumental Variable Assumptions, 2012, American Journal of Epidemiology.

[4] B. Briesacher, et al. Use of instrumental variable in prescription drug research with observational data: a systematic review, 2011, Journal of Clinical Epidemiology.

[5] Dylan S. Small, et al. Building a Stronger Instrument in an Observational Study of Perinatal Care for Premature Infants, 2010.

[6] Sebastian Schneeweiss, et al. Instrumental variable methods in comparative safety and effectiveness research, 2010, Pharmacoepidemiology and Drug Safety.

[7] J. D. Angrist, et al. Mostly Harmless Econometrics: An Empiricist's Companion, 2009, Princeton University Press.

[8] James M. Robins, et al. Analysis of the Binary Instrumental Variable Model, 2010.

[9] G. Imbens, et al. Better Late than Nothing: Some Comments on Deaton (2009) and Heckman and Urzua (2009), 2009.

[10] Judea Pearl, et al. Imperfect Experiments: Bounding Effects and Counterfactuals, 2009.

[11] Dylan S. Small, et al. War and Wages: The Strength of Instrumental Variables and Their Sensitivity to Unobserved Biases, 2008.

[12] Sebastian Schneeweiss, et al. Preference-Based Instrumental Variable Methods for the Estimation of Treatment Effects: Assessing Validity and Interpreting Results, 2007, The International Journal of Biostatistics.

[13] Dylan S. Small, et al. Sensitivity Analysis for Instrumental Variables Regression With Overidentifying Restrictions, 2007.

[15] Dylan S. Small, et al. Bounds on causal effects in three-arm trials with non-compliance, 2006.

[16] J. Robins, et al. Instruments for Causal Inference: An Epidemiologist's Dream?, 2006, Epidemiology.

[17] Wiebe R. Pestman, et al. Instrumental Variables: Application and Limitations, 2006, Epidemiology.

[18] J. Robins. Structural Nested Failure Time Models, 2005.

[19] S. Greenland. Quantifying Biases in Causal Models: Classical Confounding vs Collider-Stratification Bias, 2003, Epidemiology.

[20] M. Hernán, et al. Causal knowledge as a prerequisite for confounding evaluation: an application to birth defects epidemiology, 2002, American Journal of Epidemiology.

[21] G. Shaw, et al. Maternal pesticide exposure from multiple sources and selected congenital anomalies, 1999.

[22] J. Pearl, et al. Causal diagrams for epidemiologic research, 1999, Epidemiology.

[23] E. Korn, et al. Clinician Preferences and the Estimation of Causal Treatment Differences, 1998.

[24] J. Pearl, et al. Bounds on Treatment Effects from Studies with Imperfect Compliance, 1997.

[25] David A. Jaeger, et al. Problems with Instrumental Variables Estimation when the Correlation between the Instruments and the Endogenous Explanatory Variable is Weak, 1995.

[26] J. Robins. Correcting for non-compliance in randomized trials using structural nested mean models, 1994.

[27] Joshua D. Angrist, et al. Identification of Causal Effects Using Instrumental Variables, 1993.

[28] J. Angrist, et al. Identification and Estimation of Local Average Treatment Effects, 1994.

[29] H. Morgenstern, et al. Standardized regression coefficients: a further critique and review of some alternatives, 1991, Epidemiology.

[30] S. Greenland, et al. Re: "The fallacy of employing standardized regression coefficients and correlations as measures of effect", 1987, American Journal of Epidemiology.

[31] S. Greenland, et al. The fallacy of employing standardized regression coefficients and correlations as measures of effect, 1986, American Journal of Epidemiology.
