Brief Communication: Better Ways To Question Patients About Adverse Medical Events

Context
Investigators use diverse methods to assess the adverse events experienced by study participants.

Contribution
During a 1-month placebo run-in period of a clinical trial, this single-blind substudy randomly assigned 214 men with benign prostatic hyperplasia to 3 groups to test different methods of asking about recent medical problems. Men who completed a checklist of 53 common side effects reported many more problems than participants in the 2 groups that were given different formats of open-ended questions. For example, 77% of the checklist group reported 1 or more medical problems, compared with 13% and 14% of the open-ended groups.

Implications
Varying the assessment method can cause large differences in reported rates of adverse events.

The Editors

Currently, there is no standard method for identifying adverse events that occur during a clinical trial. Although regulatory agencies (such as the U.S. Food and Drug Administration) require that studies of new drugs report adverse events in a standard way, they do not specify a standard method for ascertaining these data (1). Consequently, how individual studies identify adverse events varies considerably. For example, early studies of nonsteroidal anti-inflammatory drug-induced gastric ulcers reported much lower frequencies of ulcers than more recent studies, mostly because researchers have recently made greater efforts to detect this side effect (2). The implications of this lack of consistent ascertainment methods are substantial; comparisons of rates of reported side effects from 2 or more drugs may not be valid if the methods of collecting adverse events differ. This could impair the ability of patients and physicians to compare the risk-benefit profiles of drugs. We therefore conducted a randomized, controlled trial to determine whether different methods of identifying adverse events in a clinical trial would lead to different estimates of the frequency of these events.

Methods

Study Design
The study protocol and all procedures were approved by the Committee on Human Research at the University of California, San Francisco. The study, which took place between April 2002 and April 2005, was a randomized, single-blind, controlled trial that assigned patients to 3 groups to test self-administered methods of assessing medical problems that they experienced while taking a placebo for 1 month.

Participants
We recruited participants from a larger study that was examining the safety and efficacy of the herb saw palmetto for treatment of benign prostatic hyperplasia (3). The trial, known as the STEP (Saw Palmetto Treatment for Enlarged Prostates) study, required that participants be 50 years of age or older, have moderate to severe symptoms of benign prostatic hyperplasia, and have no serious comorbid illness. All participants in the study gave informed consent; were told that they would be taking placebo at some point during the study; and were assigned to a single-blind, 1-month placebo run-in period.

Randomization and Intervention
After taking the placebo (referred to as the study medication) for 1 month, patients were randomly assigned to 3 methods of collecting adverse events. All patients were given 1 of 3 self-administered paper forms. The form given to the first group asked an open-ended question: "Did you have any significant medical problem since the last study visit?" The form given to the second group asked an open-ended question that was more defined: "Since the last study visit, have you limited your usual daily activities for more than 1 day because of a medical problem?" A checklist accompanied the form given to the third group, which asked a more pointed question: "Since the last visit, have you experienced any of the following (checklist attached)?" The checklist contained 53 symptoms, grouped by anatomical region. Two of the authors developed the checklist after conducting an unpublished review of checklists that were used in earlier clinical trials performed at the same institution. The checklist did not ask patients to rate the frequency or severity of symptoms and did not ask patients to make a judgment about whether their medical problem was caused by the study medication. Patients in the open-ended question groups who answered yes were asked to identify their medical problem, which was recorded by a study assistant on the same checklist used in the third group.
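The paper states only that a computer-generated allocation sequence, assigning participants to the 3 groups in equal proportions, was prepared before the study began (see Outcomes and Analysis below). The sketch below is a minimal illustration of one common way to prepare such a sequence, not the authors' actual program; the permuted-block design, block size, seed, and group labels are assumptions made for the example.

```python
# Illustrative sketch only: one common way to prepare a computer-generated
# allocation sequence, in equal proportions, for 3 groups before a study begins.
# The paper does not describe the authors' actual algorithm; the permuted-block
# design, block size, and seed below are assumptions for illustration.
import random

def make_allocation_sequence(n_participants: int,
                             groups=("open-ended", "defined open-ended", "checklist"),
                             block_size: int = 6,
                             seed: int = 42) -> list[str]:
    """Return a pre-generated allocation list using permuted blocks."""
    assert block_size % len(groups) == 0, "block size must be a multiple of the number of groups"
    rng = random.Random(seed)              # fixed seed so the sequence can be prepared in advance
    per_block = block_size // len(groups)
    sequence: list[str] = []
    while len(sequence) < n_participants:
        block = list(groups) * per_block   # each group appears equally often within a block
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

if __name__ == "__main__":
    allocation = make_allocation_sequence(214)
    for g in set(allocation):              # group sizes come out roughly equal
        print(g, allocation.count(g))
```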
Outcomes and Analysis
The primary outcome measure was the difference in the proportion of patients reporting 1 or more adverse events in each group. All patients in the STEP study were included in the current study; therefore, the sample size was not calculated on the basis of the needs of this study. Participants were randomly assigned to the 3 groups in equal proportions by using a computer-generated, random allocation sequence that was prepared before the study began. Study personnel were blinded to the allocation sequence but were aware of group assignments after they were made. Patients were not informed of their group assignment. Persons performing the data analysis were blinded to group assignment. Baseline characteristics of the 3 intervention groups were compared by using analysis of variance for continuous variables and chi-square tests for categorical variables. We also used chi-square tests to compare the number and specific type of adverse events that occurred among groups. All analyses were performed by using Stata, version 8.0 (Stata Corp., College Station, Texas).

Role of the Funding Sources
The funding organizations had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript.

Results
We randomly assigned 214 patients to 1 of 3 methods of collecting data on adverse effects. Patients were predominantly healthy, well-educated white men (mean age, 63 years) who were taking a mean of 2.5 medications (Table 1). Baseline characteristics of the patients were similar among the 3 groups. All patients completed the study and the outcome assessment (Figure).

Table 1. Baseline Characteristics of Study Participants

Figure. Flow diagram showing the distribution of participants at each stage of the study.

The group that was assigned to a checklist method reported a significantly greater number of adverse events (238 events) than the first and second groups, which were asked open-ended questions (11 and 14 adverse events, respectively; P < 0.001) (Table 2). A much higher percentage of patients in the checklist group reported 1 or more adverse events (77%) compared with the patients asked each of the 2 different open-ended questions (14% for the first group and 13% for the second group; P < 0.001). For each of the 10 most commonly reported adverse events (Table 2), participants in the checklist group reported a greater number of adverse events (P < 0.001). No serious adverse events occurred during the study period.

Table 2. 10 Most Frequently Reported Adverse Events
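To make the primary comparison concrete, the sketch below reconstructs the chi-square test of the proportion of patients reporting 1 or more adverse events across the 3 groups. The per-group sizes (71, 71, and 72) are an assumption based on splitting the 214 participants roughly equally, and the counts are derived from the reported percentages; the exact figures are not given in this excerpt, and the original analysis was performed in Stata 8.0, so SciPy is used here only to illustrate the calculation.

```python
# Hedged reconstruction of the primary comparison: chi-square test of the
# proportion of patients reporting >= 1 adverse event across the 3 groups.
# Group sizes and derived counts are assumptions based on 214 participants
# split roughly equally and the reported percentages (14%, 13%, 77%).
from scipy.stats import chi2_contingency

group_sizes = {"open-ended": 71, "defined open-ended": 71, "checklist": 72}   # assumed split of 214
reported_pct = {"open-ended": 0.14, "defined open-ended": 0.13, "checklist": 0.77}

# Build a 2 x 3 contingency table: rows = (reported >= 1 event, reported none).
with_event = [round(group_sizes[g] * reported_pct[g]) for g in group_sizes]
without_event = [group_sizes[g] - w for g, w in zip(group_sizes, with_event)]
table = [with_event, without_event]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, df = {dof}, P = {p_value:.2g}")
# With these assumed counts the test reproduces the qualitative finding in the
# paper: the checklist group's reporting rate differs from the open-ended
# groups' rates (P < 0.001).
```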
Discussion

In this randomized, controlled trial, we found that a checklist method of identifying adverse events dramatically increased the number of reported events compared with 2 types of open-ended questions. Although this finding is intuitive, the magnitude of the effect has important implications both for the conduct of clinical trials and for assessment of the risk-benefit profile of drugs and other interventions. It is common practice for physicians and patients to select drugs and other interventions on the basis of their reported rates of side effects. However, if different drugs used for the same indication are examined in clinical trials that use different methods of identifying adverse events, then it is not valid to compare the reported rates of side effects. For example, the reported rates of sexual side effects from selective serotonin reuptake inhibitors range from 2% to 73%; much of this difference is probably attributable to different methods of adverse event collection (4). Similarly, a recent systematic review found that published trials of pharmacologic treatments for rheumatoid arthritis were much more likely to report data on harm than trials of nonpharmacologic treatments (5), highlighting the difficulty of comparing the safety of different treatments for the same condition.

The 3 self-administered questions that we used to assess the frequency of adverse events in this study were, by design, limited in scope. The self-administered forms did not ask patients to describe the timing, severity, or frequency of their medical problems, nor did they ask participants or investigators to make a judgment of causality. Other techniques to assess adverse events, such as changes in vital signs, laboratory tests, physical examinations, or more detailed searches for expected adverse events, were not included. The purpose of this simplified approach was to isolate and contrast 3 different initial screening methods of identifying medical problems occurring among participants in a clinical trial.

Because all patients in the current study were taking placebo, probably none of the reported adverse events were true side effects of the study medication; rather, they were symptoms that commonly occur in adults. For example, a previous survey of university students and hospital staff found that 81% of respondents reported some symptom within the past 3 days when prompted by a checklist (6). This highlights the problem that most study participants are likely to have a high prevalence of symptoms that are unrelated to a study drug or intervention, and a checklist method is therefore likely to have very low specificity for detecting true side effects.

The wording of the 3 self-administered questions that we used in this study asked about 3 different thresholds of medical problems. One question asked participants if they experienced a significant medical problem, one asked if they limited their usual daily activities for more than 1 day because of a medical problem, and one asked if they had experienced any of 53 listed symptoms.
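The specificity point can be made concrete with a small calculation. The sketch below uses the 77% checklist reporting rate observed in this placebo group; the active-drug reporting rate is a hypothetical figure included only to show why an excess rate (risk difference) is more informative than a raw reporting rate.

```python
# Illustration of the low-specificity point: when most placebo-takers report at
# least one checklist symptom, "any symptom reported" is a poor signal of a true
# drug side effect. The placebo rate (77%) comes from this study; the drug-arm
# rate below is a hypothetical number used only for illustration.
placebo_reporting_rate = 0.77        # observed here with the 53-item checklist
hypothetical_drug_rate = 0.85        # assumed reporting rate in an active-drug arm

# Specificity of the screen "reported >= 1 checklist symptom": among patients
# with no drug-related symptoms (the placebo group), only those who report
# nothing are correctly classified as negative.
specificity = 1 - placebo_reporting_rate
print(f"specificity = {specificity:.2f}")                             # 0.23

# The excess (risk difference) subtracts the background symptom prevalence and
# is therefore a more useful summary than the raw reporting rate.
excess_rate = hypothetical_drug_rate - placebo_reporting_rate
print(f"excess reporting attributable to drug = {excess_rate:.2f}")   # 0.08
```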

[1] Boutron I, et al. Reporting of Harm in Randomized, Controlled Trials of Nonpharmacologic Treatment for Rheumatic Disease. Annals of Internal Medicine. 2005.

[2] Ioannidis J, et al. Better Reporting of Harms in Randomized Trials: An Extension of the CONSORT Statement. Annals of Internal Medicine. 2004.

[3] Safer D. Design and Reporting Modifications in Industry-Sponsored Comparative Psychopharmacology Trials. The Journal of Nervous and Mental Disease. 2002.

[4] Moher D, et al. The Revised CONSORT Statement for Reporting Randomized Trials: Explanation and Elaboration. Annals of Internal Medicine. 2001.

[5] Gøtzsche P. Non-steroidal anti-inflammatory drugs. BMJ. 2000.

[6] Lapierre Y. [Evaluation of side effects in neurotics: a trial with mesoridazine and placebo]. Canadian Psychiatric Association Journal. 1975.

[7] Rickels K, et al. Side reactions in neurotics. I. A comparison of two methods of assessment. The Journal of Clinical Pharmacology and the Journal of New Drugs. 1970.

[8] Lowenthal D, et al. Adverse nondrug reactions. The New England Journal of Medicine. 1968.

[9] Patey D. Controls in Clinical Research. 1954.

[10] Wiklund I, et al. Evaluation of three methods of symptom reporting in a clinical trial of felodipine. European Journal of Clinical Pharmacology. 2004.

[11] Sjövall J, et al. Detection of adverse drug reactions in a clinical trial using two types of questioning. Clinical Therapeutics. 1981.