The Quality of Drug Studies Published in Symposium Proceedings

For physicians, pharmacists, pharmacologists, and others, the medical literature is a key source of information about prescription drugs [1, 2]. The medical literature on drugs includes articles from peer-reviewed journals, non-peer-reviewed (controlled circulation or throwaway) journals, and the published proceedings of symposia [3, 4]. Symposia are a rapidly growing and potentially major means of disseminating information about drugs. In the clinical journals with the highest circulation rates, the number of symposia published increased from 83 during 1972-1977 to 307 during 1984-1989. Approximately half of these symposia were on pharmaceutical topics [4]. Symposia can be valuable sources of information about drugs, but evidence suggests that they can also be used to market drugs and other interventions, especially if they are industry sponsored. Approximately 70% of symposia on pharmaceutical topics are sponsored by drug companies [3, 4]. Among symposia, sponsorship by a single drug company is associated with promotional characteristics that include a focus on a single drug, misleading titles, use of brand names, and lack of peer review [4]. Other studies indicate that clinical trials, including those published in symposia, are more likely to favor a new drug therapy if they are funded by the pharmaceutical industry than if they are not [5, 6]. Although physicians often report that the peer-reviewed literature is one of their main sources of drug information, industry sources of information can sometimes have a stronger influence on prescribing behavior [2]. Thus, if symposia sponsored by drug companies are a growing source of information about drugs for pharmacists and physicians, assessing the quality of the articles in these symposia is important. 
We compared the methodologic quality and relevance of drug studies published in symposia sponsored by single drug companies with those of studies published in symposia that had other sponsors or in the peer-reviewed parent journals. We also assessed whether a methods section was present, because such a section is necessary for evaluating quality. Finally, we tested whether drug industry support of research was associated with study outcome.

Methods

A symposium is a collection of papers published as a separate issue or as a special section in a regular issue of a medical journal [4]. We defined original clinical drug articles as articles that 1) appeared to present original data from studies done in humans (that is, articles that had at least one table or figure that was not acknowledged to have been reprinted from another source) and 2) did not specifically state that they were reviews [4].

Selection of Articles

We identified original clinical drug articles that had a section describing the study methods, because such a section is needed to assess the quality of an article. Using a computer-generated list of random numbers from 1 to 625, we randomly selected symposia from the 625 symposia that had been identified for a previous study [4]. We had data on the type of sponsorship of publication for each symposium. From each selected symposium, we randomly selected one original clinical drug article that had a methods section. We continued selecting symposia until we had enough articles (n = 127) according to the sample size estimates described below. We also calculated the proportion of articles in the selected symposia, overall and by type of sponsorship, that had methods sections.

Quality Assessment

We compared the quality of original clinical drug articles published in symposia sponsored by single drug companies with that of similar articles published in symposia that had other sponsors and in the peer-reviewed parent journals.
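The sampling procedure, a random ordering of symposia followed by one randomly chosen eligible article per symposium, can be sketched as follows. The toy symposium data and the `has_methods` flag are illustrative, not from the study:

```python
import random

def select_articles(symposia, n_needed, seed=1):
    """Draw symposia in random order (a computer-generated random
    permutation of their indices) and take one randomly chosen
    article with a methods section from each, until n_needed
    articles have been collected."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    order = rng.sample(range(len(symposia)), len(symposia))
    selected = []
    for idx in order:
        eligible = [a for a in symposia[idx] if a["has_methods"]]
        if eligible:
            selected.append(rng.choice(eligible))
        if len(selected) == n_needed:
            break
    return selected

# Toy data: five symposia of four articles each; half have methods sections
symposia = [
    [{"id": (i, j), "has_methods": j % 2 == 0} for j in range(4)]
    for i in range(5)
]
picked = select_articles(symposia, n_needed=3)
```

In the study itself the pool was the 625 previously identified symposia, selection continued until 127 eligible articles were obtained, and the proportion of articles lacking a methods section was tallied along the way.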
Sample Size Estimates

We estimated the sample size needed to test the association between the independent variable (type of sponsorship of publication) and the main outcome measure (methodologic quality score). For a three-group comparison, a minimum sample of 108 symposium articles was needed to detect a minimum effect size of 0.10 (on a scale of 0 to 1), with an α value of 0.05, a statistical power of 0.80, and a standard deviation of quality scores of 0.18 based on previous results [7]. To compare articles from symposia sponsored by single pharmaceutical companies with articles from the peer-reviewed parent journals, we estimated that we would need 45 symposium articles and 45 journal articles; this estimate was the result of sample size calculations done using the variables described above. Because date of publication, journal, and therapeutic class of drug could have confounded the association between source of publication and quality [8-10], we matched each symposium article to an article from the parent journal by using these characteristics, as described previously [7]. Our sample of symposium articles contained 50 articles sponsored by single drug companies, but 5 articles published in Transplantation Proceedings were excluded from this analysis because no parent journal is associated with that publication.

Instruments

We used previously developed instruments to measure the methodologic quality of articles (defined as the minimization of systematic bias and the consistency of conclusions with results) and nonmethodologic indices of quality, such as clinical relevance and generalizability. Both instruments were valid and reliable and have been published elsewhere [7]. Four reviewers independently assessed each article: two used the methodologic quality instrument, and two used the clinical relevance instrument. We derived methodologic quality and clinical relevance scores for each article by using a previously described scoring system [7].
Each score was between 0 (lowest quality) and 1 (highest quality) and was the average of the scores of the two reviewers. Two clinical pharmacologists with extensive research experience in the health sciences did the methodologic quality assessments. For the clinical relevance instrument, three pairs of reviewers with clinical experience in general internal medicine and research experience in the health sciences each assessed one third of the articles. Each pair of reviewers reviewed the articles in the same randomized order. For both instruments, reviewers were trained as described previously [7]. For the quality assessments, each reviewer worked independently, was blinded to whether an article had been published in a symposium, and was given photocopies of articles from which author names, institution names, journal names, dates, and all other reference information had been obliterated. Reviewers were unaware of our hypotheses and of the purpose of their reviewing, and they were paid for their work. None of the reviewers were known to us or knew of our previous work before the study. We assessed the inter-rater reliability of quality scores by using the Kendall coefficient of concordance (W) with adjustment for tied ranks [11] and the intraclass correlation (R; treating both reviewers and articles as random effects) [12]. Inter-rater reliability of quality scores was high (for methodologic quality scores, W = 0.85 and R = 0.74 [95% CI, 0.67 to 0.80]; for clinical relevance scores, W = 0.77 and R = 0.56 [CI, 0.44 to 0.65]).

Drug Company Support and Study Outcome

For each article, one of us determined whether a drug company had supported the research and whether the article 1) reported an outcome favorable to the drug of interest, 2) did not report an outcome favorable to the drug of interest, or 3) did not test a hypothesis.
The drug of interest (as defined from the perspective of the authors, according to Gøtzsche [13]) was the newest drug if two or more drugs were studied. We defined research as having had drug company support if the article that reported the research acknowledged either that a drug company had provided funding or drugs or that any of the authors were employed by a drug company. We determined drug company support solely on the basis of information in the paper. If an article did not test a hypothesis, it was excluded from this analysis. We classified the remaining articles as favorable or not favorable using Gøtzsche's definitions [13]. An article was favorable if the drug that seemed to be of primary interest to the authors had the same effect as the comparison drug or drugs but with less pronounced side effects, had a better effect without more pronounced side effects, or was preferred more often by patients when the effect and side-effect evaluations were combined. All other articles were considered not favorable. The conclusions of the authors were taken at face value, even if they conflicted with the study results. To test inter-rater reliability, the other author independently assessed a subset of the articles (n = 90); agreement in classifying articles as favorable or not favorable was 85%.

Statistical Analyses

Because methodologic quality and relevance scores were normally distributed (Shapiro-Wilk test), we analyzed differences between groups (type of sponsorship of publication) by using parametric one-way analysis of variance followed by the Tukey test for multiple comparisons, or two-way analysis of variance (total error rate, 0.05). We compared matched groups (symposium articles and peer-reviewed parent journal articles) by using the paired t-test (two-tailed α = 0.05). To analyze categorical data on the outcome of studies, we tested for differences in proportions between groups by using the chi-square statistic.
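Two of the statistics used in these analyses, the Kendall coefficient of concordance with the tied-rank adjustment and the paired t statistic, can be sketched in pure Python. These are generic textbook implementations with made-up example data, not the authors' code:

```python
import math

def average_ranks(scores):
    """Rank scores 1..n, giving tied values their average rank."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def kendalls_w(ratings):
    """Kendall coefficient of concordance for m raters x n items,
    with the standard correction for tied ranks."""
    m, n = len(ratings), len(ratings[0])
    rank_rows = [average_ranks(row) for row in ratings]
    totals = [sum(row[i] for row in rank_rows) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    tie_term = 0.0  # sum over raters of sum(t^3 - t) across tie groups
    for row in rank_rows:
        counts = {}
        for r in row:
            counts[r] = counts.get(r, 0) + 1
        tie_term += sum(c ** 3 - c for c in counts.values())
    return 12 * s / (m ** 2 * (n ** 3 - n) - m * tie_term)

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for matched samples."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((v - mean_d) ** 2 for v in d) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

# Two raters scoring three articles in perfect agreement -> W = 1
w = kendalls_w([[0.2, 0.5, 0.9], [0.1, 0.4, 0.8]])

# Hypothetical quality scores for four matched article pairs
t_stat, df = paired_t([0.50, 0.60, 0.40, 0.55], [0.40, 0.40, 0.40, 0.45])
```

With perfect agreement W is 1, and with exactly reversed rankings W is 0. Converting the paired t statistic to a p-value would additionally require the t distribution's cumulative distribution function (available in, for example, scipy.stats).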
For tests of significance, we used an α value of 0.05, and all hypothesis tests were two-sided.

Results

Presence of a Methods Section

To obtain 127 original clinical drug articles for quality assessment, we had to select 213 symposia containing a total of 5041 articles. The proportions of articles that reported original data but contained no methods sections were 4% overall (195 of 5041), 10% (108 of 1064) in the symposia sponsored by single drug companies,

References

[1] P. Rochon, et al. A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Archives of Internal Medicine, 1994.
[2] D. Rennie, et al. Throw it away, Sam: the controlled circulation journals. AJR: American Journal of Roentgenology, 1990.
[3] P. Dieppe, et al. Is research into the treatment of osteoarthritis with non-steroidal anti-inflammatory drugs misdirected? The Lancet, 1993.
[4] E. Hemminki. Quality of clinical trials: a concern of three decades. Methods of Information in Medicine, 1982.
[5] K. Dickersin, et al. Factors influencing publication of research results: follow-up of applications submitted to two institutional review boards. JAMA, 1992.
[6] D. Rennie, et al. The publication of sponsored symposiums in medical journals. The New England Journal of Medicine, 1992.
[7] P. Easterbrook, et al. Publication bias in clinical research. The Lancet, 1991.
[8] E. Hemminki. Study of information submitted by drug companies to licensing authorities. British Medical Journal, 1980.
[9] P. R. Manning, et al. How internists learned about cimetidine. Annals of Internal Medicine, 1980.
[10] K. Dickersin, et al. Publication bias and clinical trials. Controlled Clinical Trials, 1987.
[11] P. Rochon, et al. Evaluating the quality of articles published in journal supplements compared with the quality of those published in the parent journal. JAMA, 1994.
[12] B. W. Polemis. Nonparametric Statistics for the Behavioral Sciences. 1959.
[13] M. Cho, et al. Instruments for assessing the quality of drug studies published in the medical literature. JAMA, 1994.
[14] E. A. Haggard. Intraclass correlation and the analysis of variance. 1960.
[15] R. Simes. Publication bias: the case for an international registry of clinical trials. Journal of Clinical Oncology, 1986.
[16] F. Mosteller, et al. Reporting on methods in clinical trials. The New England Journal of Medicine, 1982.
[17] P. Gøtzsche. Reference bias in reports of drug trials. British Medical Journal, 1987.
[18] R. Davidson. Source of funding and outcome of clinical trials. Journal of General Internal Medicine, 1986.
[19] R. Hartley, et al. Scientific versus commercial sources of influence on the prescribing behavior of physicians. The American Journal of Medicine, 1982.
[20] P. Gøtzsche. Meta-analysis of NSAIDs: contribution of drugs, doses, trial designs, and meta-analytic techniques. Scandinavian Journal of Rheumatology, 1993.
[21] F. Mosteller, et al. How study design affects outcomes in comparisons of therapy. I: Medical. Statistics in Medicine, 1989.
[22] E. Andrew, et al. Publications on clinical trials with X-ray contrast media: differences in quality between journals and decades. European Journal of Radiology, 1990.