Alternative Medicine: A Mirror Image for Scientific Reasoning in Conventional Medicine

Discussions about the scientific value of alternative medicine quickly touch the raw nerve of conventional medical reasoning and medical wisdom. As such, alternative medicine is a useful mirror for conventional medicine (the idea that alternative medicine, in particular homeopathy, acts as a forbidden mirror image for conventional medicine was described by Wiersma [1]): How one looks at the other may reveal more about oneself. Physicians' response to the other may clarify neglected or hidden aspects of the scientific process in conventional medicine. This article examines aspects of the process of weighing scientific evidence in modern medicine. It is not primarily concerned with alternative medicine; rather, we reflect on scientific reasoning within medicine as a whole. Nevertheless, our train of thought in this article was triggered by examining our response to claims of scientific proof of the effectiveness of alternative medicine. We use homeopathy as the main example to discuss the scientific evaluation of alternative medicine because homeopathy has a long and extensive history of evaluation by randomized, controlled trials, and because the debate surrounding homeopathy makes the contradictions between seemingly solid evidence and scientific judgment most clearly visible.

Homeopathy and Scientific Evidence of Its Efficacy

Homeopathy was devised in Germany by Samuel Hahnemann (1755-1843). It espouses the belief that whatever symptoms a substance causes in a healthy person, a disease with a similar symptom configuration can be cured by small amounts of the same substance: Similia similibus curentur (like cures like), a principle that is already controversial in itself. Even more controversially, homeopathy claims that the more dilute a substance (if prepared by a series of shakings called succussion), the more spiritual vital essence is released and therefore the more potent the medicine that is created: Less becomes more (2).
Remedies are often diluted up to or beyond Avogadro's number (10^23), with a chance that not a single active molecule is left in the vial (2, 3).

Homeopathy has been debated for more than a century and a half. The debate has entered the modern medical era: Randomized trials have been performed and then summarized in meta-analyses. A recent meta-analysis (4), which built on previous ones (3, 5), found 89 trials that were described as adequate. The authors of the meta-analysis conclude that the data are not compatible with the hypothesis that the clinical effects of homeopathy are completely due to placebo (4). The combined odds ratio showed a twofold benefit in favor of homeopathy, even after statistical correction for publication bias. The future of homeopathy now seems bright: A meta-analysis of randomized trials concluded that homeopathic effects can no longer be seen as placebo effects and that the positive reported effects are not due to publication bias. Yet, most physicians working in conventional medicine vehemently dismiss this conclusion and find all kinds of counterarguments: The trials might have been too small; there is only an overall effect, which might be due to the accumulation of several small biases (the position of one of us [6]); a repeatedly proven, consistent effect has never been shown for a single indication with a particular regimen (7). In short, we want to find good reasons to discard the randomized trials.

Why? What is our ultimate reason for discarding the evidence from the meta-analysis of randomized trials on homeopathy? The authors of the meta-analysis showed that even the very best trials (as judged by the authors' methodologic standards) that used the highest dilutions (approaching or surpassing Avogadro's number) still showed a beneficial effect. That is materially impossible.
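The arithmetic behind that impossibility is easy to check. The short sketch below is our own illustration, not part of the original argument; the helper name `molecules_remaining` and the choice of the commonly sold 30C potency (thirty successive 1:100 dilutions) are purely illustrative assumptions.

```python
AVOGADRO = 6.022e23  # molecules per mole (approximate value)

def molecules_remaining(moles: float, dilution_factor: float, steps: int) -> float:
    """Expected number of solute molecules left after `steps` serial
    dilutions, each diluting by `dilution_factor` (100 for a 'C' step)."""
    return moles * AVOGADRO * dilution_factor ** -steps

# Starting from a full mole of substance, a 30C preparation leaves:
print(molecules_remaining(1.0, 100, 30))  # on the order of 10^-37 molecules
```

Even starting from an entire mole of active substance, the expected number of surviving molecules at 30C is roughly 6 x 10^-37, so the probability that even one molecule of the original substance remains in the vial is essentially zero.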
The highest dilutions in homeopathic medicines are so high that it is not possible to determine by ordinary chemical principles which vial contains an active product and which one contains placebo (2, 3). Microbiologists know for sure that infinite dilutions of an antibiotic will never show any effect on bacterial growth. No physician will use an antihypertensive medication in a dilution that surpasses Avogadro's number. No oncologist would propose to dilute cytotoxic drugs beyond the limit of chemical detectability. Because of the impossibility of chemical effects, adherents of conventional medicine disbelieve the evidence from the randomized trials on homeopathy.

This disbelief leads to ever more intricate reanalyses of the meta-analysis. A novel proposition is to apply meta-regression analysis to measures of the quality of randomized trials (8). Through a meta-regression, one tries to estimate the effect of each individual quality element that matters in particular trials (for example, size and blinding). This approach differs from that of the authors of the meta-analysis on homeopathy, who used an overall quality score. A new meta-regression analysis of the homeopathy trials found that inadequate blinding and small sample size strongly determined the overall positive effect of homeopathy (9). The two largest, adequately blinded trials on homeopathy showed no effect, a finding that is consistent with the intercept of the meta-regression, which stood for large blinded trials. However, the authors are quick to point out that their results do not prove that the apparent benefits of homeopathy are due to bias. Nevertheless, those of us who think that the homeopathy results are impossible will see this meta-regression as a strong confirmation of our position.

Do Trials Overturn Theory?
Once we recognize the tendency not to accept the evidence if it is incompatible with theory, and to accept this reasoning as valid, we should analyze it: It might teach us a lot about how we actually reason in conventional medicine. When reflecting on our behavior in several controversies, we recognize that sometimes we accept the evidence from the randomized trial and overturn a theory, however beautiful it was, but that at other times we stick with the theory and dismiss the evidence. Examples of both behaviors can be found in conventional medicine.

One of the more fashionable and popular recent theories in immunology and infectious disease medicine concerned the immunologic mechanism of septic shock. Gram-negative septic shock was ascribed to circulating endotoxins produced by the bacteria; endotoxins would elicit a powerful cytokine response that harmed the organism itself. Animal research showed that gram-negative shock could be prevented if the blood was immediately cleared of circulating endotoxin and cytokines. This was done by using antibodies tailor-made by the stock-raising stars of the biotech industry. The first randomized trial, which studied antibodies against endotoxin, was reported as a success (10). However, doubts were expressed upon discovery that the positive findings concerned only the subgroup of patients with gram-negative sepsis. This discovery gave way to a lengthy discussion that is pertinent to our reflection on how we interpret evidence. The problem was that at admission to an intensive care unit, the patient's infection cannot be identified as gram-negative or gram-positive (or as another type of infection), nor can one determine whether the infection led to bacteremia. Thus, in the trial, all patients with clinical suspicion of sepsis were randomly assigned. However, the type of infection was known only after 24 to 72 hours.
The analysis was then restricted to patients with gram-negative bacteremia and clinical sepsis; these results showed a clear beneficial effect, supporting the immunologic theory and the animal experiments. Yet, the original report already indicated that the benefit almost disappeared when all randomly assigned patients were considered. The only logical conclusion was that the intervention had more untoward outcomes in the patients without gram-negative sepsis. Regardless of the ensuing discussions (which also frowned on other subgroup analyses [11]), we can imagine that the investigators, as well as the journal's peer reviewers, originally found the restriction justified: Theory predicted that the intervention should work only in patients with gram-negative sepsis. Any untoward outcome in the remaining patients must have appeared to be a freak accident.

Subsequently, however, additional trials studied tailor-made antibodies against mediators of sepsis. Another picture emerged: no benefit, and sometimes even a small effect to the contrary (12). Immunologists and infectious disease physicians recognized that the relationship among septic shock, endotoxins, and the cytokine response might have been more complex. It was not just a strong immunologic response to high levels of endotoxin that might be detrimental; the timing of the response might have an effect as well. An initial strong response to endotoxin might be beneficial, while an initial weak response might lead to dissemination of the infection. That dissemination, in turn, might lead to higher levels of endotoxin only at a later stage of the disease. Even some old animal experiments were seen in a new light (13, 14). Thus, in the end, the consistently negative findings of the randomized trials overturned the immunologic theory.

The inverse also happens. Recurrent vasovagal syncope is an annoying and sometimes dangerous condition. During the later phases of a syncope, profound bradycardia can develop.
It is believed that this bradycardia might augment or prolong the symptoms. Therefore, some physicians have proposed the implantation of demand pacemakers to people with recurrent vasovagal syncope. Such pacemakers would not prevent the onset of syncope but might prevent its full development. A randomized trial of implantation of demand pacemakers had such positive results that the investigators stopped it prematurely (15). However, researchers who had studied the physiology of vasovagal syncope by using a tilt table to eli

[1] E. Ziegler et al. Anti-endotoxin monoclonal antibodies. The New England Journal of Medicine, 1992.

[2] W. Browner et al. Are all significant P values created equal? The analogy between diagnostic tests and clinical research. JAMA, 1987.

[3] D. Boomsma et al. Genetic influence on cytokine production and fatal meningococcal disease. The Lancet, 1997.

[4] S. Goodman. Toward evidence-based medical statistics. 2: The Bayes factor. Annals of Internal Medicine, 1999.

[5] J. Cornfield. Recent methodological contributions to clinical trials. American Journal of Epidemiology, 1976.

[6] J. Concato et al. Randomized, controlled trials, observational studies, and the hierarchy of research designs. The New England Journal of Medicine, 2000.

[7] D. Barwick et al. Manifesto of a passionate moderate. 1999.

[8] M. Egger et al. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA, 1999.

[9] J. Vincent. Search for effective immunomodulating strategies against sepsis. The Lancet, 1998.

[10] M. Langman. Homoeopathy trials: reason for good ones but are they warranted? The Lancet, 1997.

[11] R. Bone et al. Immunologic dissonance: a continuing evolution in our understanding of the systemic inflammatory response syndrome (SIRS) and the multiple organ dysfunction syndrome (MODS). Annals of Internal Medicine, 1996.

[12] W. Jonas et al. Are the clinical effects of homoeopathy placebo effects? A meta-analysis of placebo-controlled trials. The Lancet, 1997.

[13] J. Vandenbroucke. 175th anniversary lecture. Medical journals and the shaping of medical knowledge. The Lancet, 1998.

[14] J. Vandenbroucke. Homoeopathy trials: going nowhere. The Lancet, 1997.

[15] L. Hedges et al. Are the clinical effects of homeopathy placebo effects? A meta-analysis of placebo-controlled trials. The Lancet, 1997.

[16] P. Skrabanek. Demarcation of the absurd. The Lancet, 1986.

[17] P. Rochon et al. A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Archives of Internal Medicine, 1994.

[18] J. Kleijnen et al. Clinical trials of homoeopathy. BMJ, 1991.

[19] T. Bodenheimer et al. Uneasy alliance: clinical investigators and the pharmaceutical industry. The New England Journal of Medicine, 2000.

[20] S. Gould et al. Deconstructing the "Science Wars" by reconstructing an old mold. Science, 2000.

[21] A. Hartz et al. A comparison of observational studies and randomized, controlled trials. The New England Journal of Medicine, 2000.

[22] E. O'Brien. The Lancet maketh the man? Sir Dominic John Corrigan (1802-80). The Lancet, 1980.

[23] M. Brignole et al. Dual-chamber pacing in the treatment of neurally mediated tilt-positive cardioinhibitory syncope: pacemaker versus no therapy: a multicenter randomized study. The Vasovagal Syncope International Study (VASIS) Investigators. Circulation, 2000.

[24] C. Sprung et al. Treatment of gram-negative bacteremia and septic shock with HA-1A human monoclonal antibody against endotoxin. A randomized, double-blind, placebo-controlled trial. The HA-1A Sepsis Study Group. 1991.

[25] S. Connolly et al. The North American Vasovagal Pacemaker Study (VPS). A randomized trial of permanent cardiac pacing for the prevention of vasovagal syncope. Journal of the American College of Cardiology, 1999.

[26] S. J. Pocock et al. Randomized trials or observational tribulations? The New England Journal of Medicine, 2000.

[27] A. Cantor et al. The uncertainty principle and industry-sponsored research. The Lancet, 2000.