Systematic Reviews: Critical Links in the Great Chain of Evidence

Successful clinical decisions, like most human decisions, are complex creatures [1]. In making them, we draw on information from many sources: primary data and patient preferences, our own clinical and personal experience, external rules and constraints, and scientific evidence (Figure 1). The mix of inputs to clinical decisions varies from moment to moment and from day to day, depending on the decision and the decision makers. In general, however, the proportion of scientific evidence in the mix has grown progressively over the past 150 years or so.

Figure 1. Factors that enter into clinical decisions.

One major reason why the mix has changed is simply the explosive increase in the amount and quality of the scientific evidence that has come from both the laboratory bench and the bedside. The maelstrom of change wrought by the molecular biology revolution has been matched at the clinical level by a tidal wave of increasingly sophisticated clinical trials. It is estimated that since the results of the first randomized clinical trials in medicine were published in the 1940s [2], roughly 100 000 randomized and controlled clinical trials have appeared in print [3], and the results of many well-conducted, completed trials remain unpublished [4]. A second reason for the growing emphasis on scientific evidence is the increasing expectation, from both within and outside of the medical profession, that physicians will produce and use the evidence in delivering care.

The future holds the promise of continued expansion of the body of research information. However, it also holds the parallel threat of increasingly inadequate time and resources with which to find, evaluate, and incorporate new research knowledge into everyday clinical decision making. Fortunately, mechanisms are emerging that will help us acquire the best, most compelling, and most current research evidence. Particularly promising in this regard is the use of systematic reviews.

Systematic reviews are concise summaries of the best available evidence that address sharply defined clinical questions [5, 6]. Of course, the concept of reviews in medicine is not new. Preparation of reviews has traditionally depended on implicit, idiosyncratic methods of data collection and interpretation. In contrast, systematic reviews use explicit and rigorous methods to identify, critically appraise, and synthesize relevant studies. As their name implies, systematic reviews are not satisfied with finding part of the truth; they look for the whole truth. That is, they seek to assemble and examine all of the available high-quality evidence that bears on the clinical question at hand.

Although it looks easy from the outside, producing a high-quality systematic review is extremely demanding. The realization of how difficult the task is should be reassuring to all of us who have been frustrated by our seeming inability to stay informed and up to date by combing through the literature ourselves. The concepts and techniques involved, including that of meta-analysis, are at least as subtle and complex as many of those currently used in molecular biology. In this connection, it is important to understand that a systematic review and a meta-analysis are not one and the same. Meta-analysis is a specific methodologic and statistical technique for combining quantitative data. As such, it is simply one of the tools, albeit a particularly important one, used in preparing systematic reviews.
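To make that distinction concrete, the following is a minimal sketch, not drawn from this editorial or from any study it cites, of the simplest form of pooling that meta-analysis performs: a fixed-effect, inverse-variance combination of treatment-effect estimates from several hypothetical trials.

# Minimal illustration of fixed-effect, inverse-variance meta-analysis.
# The trial results below are hypothetical and are used only to show the
# arithmetic of pooling; they come from no study cited in this editorial.

import math

# Each pair is (log odds ratio, standard error) from one hypothetical trial.
trials = [(-0.35, 0.20), (-0.10, 0.15), (-0.25, 0.30)]

# Weight each trial by the inverse of its variance, so that more precise
# trials contribute more to the pooled estimate.
weights = [1.0 / se ** 2 for _, se in trials]
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval on the log scale, back-transformed to an odds ratio.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled odds ratio: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(low):.2f} to {math.exp(high):.2f})")

Real meta-analyses involve much more, including assessment of heterogeneity, random-effects models, and sensitivity analyses, but this weighted pooling is the quantitative core that distinguishes meta-analysis from the broader systematic review process.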
Although many of the techniques involved in creating a systematic review have been widely available for some time, the techniques for generating clinical recommendations that consider baseline risk, cost, and the totality of the evidence available from a systematic review constitute a relatively new area of research, one that requires dealing with a range of critical yet abstract issues, such as ambiguity, context, and confidence. Many articles describing the conceptual basis of systematic reviews have been published during the past decade [7], but detailed, how-to information on preparing, understanding, and using systematic reviews has been scattered and incomplete. The forthcoming series of articles on systematic reviews that begins with the paper by Cook and colleagues in this issue [8] has been designed to collate and update that information.

Cook and colleagues describe systematic reviews in detail, discuss their strengths and limitations, and explain how they differ from traditional, narrative reviews. The remainder of the papers in the series are divided into two categories: using systematic reviews in practice and conducting reviews. These articles are primarily broad narrative overviews. In preparing them, their authors have drawn on widely varying sources, including electronic searches of the published literature, reference lists, the Cochrane Library [3], personal files, colleagues, and personal experience. Most of the articles are directed toward practitioners who wish to learn more about what systematic reviews are and how to use them. A few are directed primarily toward specific audiences, such as physician-educators. And we hope that the last articles in the series will entice some readers to join the growing number of groups that are doing the hard but intensely rewarding work of preparing systematic reviews.

Some of the articles inevitably delve into technical and seemingly arcane methodologic topics, but we make no apologies for this. Medicine at all levels is technical, and pushing the envelope inevitably involves moving out into unfamiliar and sometimes uncomfortable territory. Perhaps more important, however, is that many aspects of the systematic review process will be familiar to clinicians because these techniques are similar to the ones they use every day: collecting, filtering, synthesizing, and applying information.

How can the full potential of the knowledge contained in systematic reviews be realized in clinical practice? There is no simple answer, but the following would help. First, developers of electronic databases must, at the very least, pioneer improved (that is, more transparent and clinically meaningful) approaches to searching, thereby giving physicians rapid, sensitive, and specific access to multiple data sources. Second, we need many more systematic reviews that address the natural history and diagnosis of disease and the benefits and potential harms of health care interventions. Third, we need to champion the production of new, well-designed, high-quality research that evaluates important patient outcomes, the raw material of systematic reviews and a crucial part of clinical decision making. And, finally, both physicians and the health care systems in which we work need to fully embrace and tangibly support lifelong learning as an essential element in the practice of good medicine.
A recent related development is an international movement to improve the reporting of clinical research, particularly the results of randomized, controlled trials [9] and meta-analyses [10]. These efforts focus on clear, comprehensive communication of the methods and results of clinically relevant research through the development and application of reporting standards suggested by editors, researchers, methodologists, and consumers. These standards should allow readers to better appraise, interpret, and apply the information in published reports of research to their own practices and situations. Perhaps equally important is the possibility that these standards will create a positive ripple effect, starting at the earliest stages of research planning and extending through the conduct of clinical trials.

Exciting new information pouring out of the molecular biology revolution has the potential to transform medicine. But even this enormously powerful information will be of little use to physicians and their patients unless 1) the diagnostic and therapeutic interventions that flow from it are stringently tested in clinical trials and 2) the results of those trials are synthesized and made accessible to practitioners. Systematic reviews are thus a vital link in the great chain of evidence that stretches from the laboratory bench to the bedside.

From this perspective, the awesome task of extracting the knowledge already encoded in the tens of thousands of high-quality clinical studies, published and unpublished, is arguably every bit as important to our health and well-being as the molecular biology enterprise itself. The task can only grow in size and importance as more and better trials are conducted; indeed, it has already been likened in scope and importance to the Human Genome Project [11]. It is our earnest hope that these articles on systematic reviews will play a useful part in strengthening the chain of evidence that links research to practice.

Dr. Cook: Department of Medicine, St. Joseph's Hospital, 50 Charlton Avenue East, Hamilton, Ontario L8N 4A6, Canada.
Dr. Davidoff: Annals of Internal Medicine, American College of Physicians, Independence Mall West, Sixth Street at Race, Philadelphia, PA 19106.