Adverse Events: The More You Search, the More You Find

People want reliable information about potential harms of medications. Concerns about possible adverse effects guide therapy selections, and unpleasant surprises about unsuspected harms cause anxiety and make headlines (1). We often rely on compendia and product inserts for information about such effects. These materials offer litanies of possible adverse events, sometimes accompanied by an estimate of how often those events might occur. Whence are these estimates derived? What do they really mean? How can we better measure and understand how many and what kinds of harms may be caused by medications?

Sources of Evidence about Medication-Related Harms

We can identify medication-related harms from any of a variety of sources of evidence, including case reports, observational studies, and randomized trials (2). The various sources (for example, observational studies vs. randomized trials) may provide different estimates and inferences about the harms (3). The optimal source for assessing a harm depends on many factors: whether the intent is to establish causality; the underlying frequency and severity of the event; previous expectations and knowledge about the potential event; how easily the event can be measured; how soon the event is likely to occur after the start of therapy; and whether the event is reversible.

Compendia and product inserts usually list severe or unusual adverse events that are allegedly associated with a particular drug. The authors of these materials often derive this information from sources such as case reports, postmarketing surveillance data, or observational studies (4). The frequency and causality of the events are often unclear because the source may not have evaluated a sufficient sample of people at risk for an adverse event, or appropriate comparison groups may have been lacking.
Knowing whether an adverse event occurs in 0.1%, 1%, or 10% of patients requires reliable numerators (numbers of patients with adverse events) and denominators (numbers of patients exposed to an intervention). When there is not a clear cause-and-effect link between harms and exposures, knowing how many of the observed harms can be fairly attributed to a medication also requires comparisons with groups that did not receive the medication.

Compendia and product inserts also list rates of minor and serious adverse events that were observed in single randomized, controlled trials. The trials may have been small and designed specifically to test efficacy. Such trials may be reasonable sources for detecting common, expected adverse events that occur shortly after exposure, but they usually do not detect rare or delayed events. They may miss unexpected events or ones that are difficult to measure. Furthermore, the rates of adverse events reported in trials designed to test drug efficacy may underestimate the frequency or severity of harms seen in practice. Investigators typically conduct the trials under controlled dosing and monitoring conditions in groups of patients who have few comorbidities and are not using many concomitant medications.

Meaning of the Estimates

In this issue, Bent and colleagues (5) remind us that how we define and look for problems markedly affects the numbers of adverse effects that patients report: the more aggressively we search, the more we find. Patients who were given a checklist of 53 possible adverse events reported 20-fold more events than patients who answered open-ended questions about recent adverse events. The study did not address which method of elicitation was best or whether problems elicited through detailed checklists might routinely overestimate clinically relevant adverse events.
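The numerator, denominator, and comparison-group arithmetic described earlier in this section can be sketched with a few lines of Python. All counts below are invented for illustration; nothing here comes from any trial cited in this editorial.

```python
# Hypothetical counts illustrating why both numerators and denominators,
# plus an untreated comparison group, are needed to attribute harms.
treated_events, treated_n = 30, 1000      # 3.0% of exposed patients had the event
control_events, control_n = 10, 1000      # 1.0% background rate without the drug

treated_rate = treated_events / treated_n
control_rate = control_events / control_n

# Risk difference: the share of observed events fairly attributable to the
# medication, i.e., what remains after subtracting the background rate.
attributable_risk = treated_rate - control_rate   # 2 per 100 exposed patients

print(f"Observed rate on drug: {treated_rate:.1%}")
print(f"Background rate:       {control_rate:.1%}")
print(f"Attributable risk:     {attributable_risk:.1%}")
```

Without the comparison group, all 30 events might be blamed on the drug; with it, only about two thirds of them can be.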
But clearly, what we ask patients to report (for example, any symptom or only those symptoms that we perceive to be a major harm) and how we ask will affect the frequency of reported adverse events.

Patients' perceptions about therapeutic alternatives are another factor that can affect the frequencies of reported harms. When the first pivotal trial of triple therapy was conducted in HIV-infected patients, 1 group of patients was assigned triple therapy that included indinavir. Of this group, only 1% discontinued treatment because of adverse events (6). At that time, most patients perceived that they had no other good options. Two years later, when the same regimen was compared with one containing efavirenz in place of indinavir, 21% of the patients receiving indinavir withdrew because of adverse events and another 22% withdrew for other reasons (7). Thus, patients' judgments about tolerable harm changed dramatically when they felt they had effective alternatives.

Sometimes investigators or sponsors may change the definition or manner in which harms are assessed to show that new medications have a better toxicity profile than older treatments. When new nonsteroidal anti-inflammatory drugs were developed, investigators used endoscopy to document minor drug-related erosions of the gastric mucosa, whereas previous investigators had relied on clinical evaluations to determine the presence of drug-induced gastrointestinal bleeding. Some of the newer drugs, therefore, were claimed to be less harmful than the older ones. Of interest, the absolute rates of adverse effects related to both newer and older drugs were higher than estimates made before this shift in definitions of harm (8).

Finally, multiple (and sometimes subtle) details of studies affect the frequency of reported harms.
These include the number and timing of follow-up visits at which adverse events are assessed; how forms are administered or completed; whether expected and unexpected events are assessed similarly; and whether the events must be judged as drug-related and, if so, whether that attribution is made by someone masked to the treatment assignment.

Improving Measurement and Understanding

Randomized, controlled trials will remain a critical source of information about medication-related harms because they provide frequency estimates, evidence about causality, and a fair comparison between treated and untreated groups. Trial design, reporting, and interpretation, however, need improvement (9). When designing trials, we must think more carefully about what harms to measure, how to grade their seriousness and severity, and how to address any unmasked assessments. We also must more often conduct trials in settings and patients similar to those in which drugs are used in practice. When reporting trials, we should follow CONSORT (Consolidated Standards of Reporting Trials) guidelines and specify exactly how and when harms-related information was collected (10). When interpreting results, we should heed the advice of the authors of a research letter in this issue: it is almost always inappropriate to make statements about "no difference" in adverse event rates between groups on the basis of nonsignificant P values (11). Rates of adverse events derived from single, modest-sized trials that are not statistically different typically do not exclude the possibility of major, clinically important differences in harms between groups (10, 12). To improve the reliability of information about medication-related harms, authors of compendia and product inserts should strive more often to report adverse event rates that reflect combinations of data from multiple trials.
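To see why a nonsignificant P value cannot rule out important harm, consider a minimal sketch with invented counts (4 events among 100 treated patients vs. 1 among 100 controls). It uses a simple Wald normal approximation for the risk difference, chosen for brevity rather than drawn from any method in the cited letter.

```python
import math

# Hypothetical small trial: 4/100 adverse events on drug vs. 1/100 on comparator.
e1, n1 = 4, 100
e0, n0 = 1, 100
p1, p0 = e1 / n1, e0 / n0

rd = p1 - p0                                              # risk difference: 3 percentage points
se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)   # Wald standard error

z = rd / se
p_value = math.erfc(abs(z) / math.sqrt(2))                # two-sided normal approximation
lo, hi = rd - 1.96 * se, rd + 1.96 * se                   # 95% confidence interval

print(f"Risk difference: {rd:.1%} (95% CI {lo:.1%} to {hi:.1%}), P = {p_value:.2f}")
```

Here the P value of about 0.17 would conventionally be called "nonsignificant," yet the confidence interval runs from roughly a 1-percentage-point decrease to a 7-percentage-point absolute increase in harm, which no prudent clinician would dismiss as "no difference."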
Unfortunately, even with excellent design and reporting, we might not be able to combine data across trials if they use disparate definitions and methods to assess harms. To alleviate this problem, we recommend that more fields follow the leads of rheumatology, vaccine research, and oncology, which have developed standardized definitions of harms so that, increasingly, trials in those fields share a common reference (13-15). Clinical research could then be planned to integrate standardized harms-related information across many trials to generate reliable, large-scale evidence.

Regardless of improvements that we may see in the measurement of adverse effects, we think that regulatory agencies, authors of compendia, practicing physicians, and the general public must better appreciate the limitations of what can be learned about medication-related harms from randomized trials and other studies. Case reports and observational studies will remain essential for identifying late-appearing and uncommon serious adverse events (4, 16), unexpected and difficult-to-measure harms, and particular patient populations at high risk for harmful side effects. We should support efforts aimed at making postmarketing drug surveillance more routine and efficient (17).

Suboptimal methods of identifying and reporting medication-related harms must be remedied (Table). In the absence of standardized definitions and methods, reported toxicity rates can be manipulated to the point that they become virtually meaningless. We must no longer accept confusing lists of noncomparable percentages of adverse events for clinical or scientific purposes. These lists can needlessly alarm patients and physicians or invite dismissal of real medication hazards. We must insist on better understanding of how numbers about harms were collected, where they came from, and what they mean.

Table. Improving the Identification and Understanding of Information about Medication-Related Harms

[1] B. Giraudeau, et al. Reporting of drug tolerance in randomized clinical trials: when data conflict with authors' conclusions. 2006, Annals of Internal Medicine.

[2] A. Avins, et al. Brief Communication: Better Ways To Question Patients about Adverse Medical Events. 2006, Annals of Internal Medicine.

[3] Peter C. Gøtzsche, et al. [Better reporting of harms in randomized trials: an extension of the CONSORT statement]. 2005, Ugeskrift for Laeger.

[4] Eric J. Topol, et al. Failing the public health: rofecoxib, Merck, and the FDA. 2004, The New England Journal of Medicine.

[5] John P. A. Ioannidis, et al. Availability of large-scale evidence on specific harms from systematic reviews of randomized trials. 2004, The American Journal of Medicine.

[6] B. Strom. Risk assessment of drugs, biologics and therapeutic devices: present and future issues. 2003, Pharmacoepidemiology and Drug Safety.

[7] Jeffrey B. Gross, et al. Timing of New Black Box Warnings and Withdrawals for Prescription Medications. 2003.

[8] C. N. Coleman, et al. CTCAE v3.0: development of a comprehensive grading system for the adverse effects of cancer treatment. 2003, Seminars in Radiation Oncology.

[9] Robert T. Chen, et al. The Brighton Collaboration: addressing the need for standardized case definitions of adverse events following immunization (AEFI). 2002, Vaccine.

[10] J. Ioannidis, et al. Completeness of safety reporting in randomized trials: an evaluation of 7 medical areas. 2001, JAMA.

[11] S. Shapiro, et al. Epidemiological assessment of drug-induced disease. 2000, The Lancet.

[12] P. Gøtzsche, et al. Non-steroidal anti-inflammatory drugs. 2000, BMJ: British Medical Journal.

[13] K. Tashima, et al. Efavirenz plus zidovudine and lamivudine, efavirenz plus indinavir, and indinavir plus zidovudine and lamivudine in the treatment of HIV-1 infection in adults. Study 006 Team. 1999, The New England Journal of Medicine.

[14] M. A. Fischl, et al. A controlled trial of two nucleoside analogues plus indinavir in persons with human immunodeficiency virus infection and CD4 cell counts of 200 per cubic millimeter or less. AIDS Clinical Trials Group 320 Study Team. 1997, The New England Journal of Medicine.

[15] R. Day, et al. Non-steroidal anti-inflammatory drugs. 2019, Reactions Weekly.

[16] G. Venning. Identification of adverse reactions to new drugs. II (continued): How were 18 important adverse reactions discovered and with what delays? 1983, British Medical Journal.

[17] G. R. Venning. Identification of adverse reactions to new drugs. II: How were 18 important adverse reactions discovered and with what delays? 1983, British Medical Journal.