On 30 June 2009, another milestone was achieved in progress toward a national system to promote medical research that focuses on decision making by physicians and patients. The first steps occurred in late 2007 and early 2008 with a seminal article (1) and an Institute of Medicine (IOM) report that called for a national initiative of research to support better decision making about interventions in health care (2). One of us was a coeditor of the IOM report. A third milestone was reached when both presidential candidates endorsed this concept. A fourth milestone came when the president signed into law the American Recovery and Reinvestment Act of 2009 (ARRA), which allotted $1.1 billion to support this form of research, now known as comparative effectiveness research (CER). The legislation created a federal council on CER and asked the IOM to elicit input from a broad array of stakeholders on which research topics should have the highest priority for funding through the ARRA and then to develop a list of the highest-priority topics for the Secretary of Health and Human Services to consider. By law, the Federal Coordinating Council and the IOM committee charged with setting priorities for comparative effectiveness research reported to the Secretary on 30 June 2009. This issue contains both this article, a commentary on the IOM committee report (3) by the committee's cochairs, and a perspective on better research methods for CER (4).

Definition of CER

The IOM committee quickly settled on a working definition of CER, which consisted of the elements of earlier definitions reduced to 2 sentences: CER is the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition, or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.

Two key elements embedded in this definition are the direct comparison of effective interventions and their study in patients who are typical of day-to-day clinical care. These features would ensure that the research provides information that decision makers need to know, as would a third feature: research designed to identify the clinical characteristics that predict which intervention would be most successful in an individual patient. The same research design would also help policymakers by identifying subpopulations of patients that are more likely to benefit from one intervention than the other.

High-Priority Research Topics

The IOM committee sought advice from a broad range of stakeholders and received it in 3 forms: approximately 90 letters; 54 oral presentations at a day-long hearing in Washington, DC; and a Web-based nomination form. Overall, the committee received more than 2606 nominations from 1758 individual respondents within 3 weeks. In a 3-step voting process, the committee identified 100 high-priority topics, which it ranked in quartiles. At each stage, the committee sought to have a balanced portfolio of topics. Table 5-1 in the report (available at www.iom.edu/cerpriorities) lists the 100 highest-ranked topics.

Establishing a Sustainable National Program of CER
The U.S. Congress is currently debating legislation that would guarantee health insurance to more Americans while reforming the U.S. health system. A key aim of the legislation is to moderate the rate at which the cost of health care increases while improving the quality of care. Many leaders believe that reaching this goal will require a sustained effort to produce better evidence to inform decision making. Government officials have characterized the $1.1 billion allocated to CER as a down payment on a national program of CER. As of this writing, the House of Representatives bill proposes a national program of CER, and the Senate Finance Committee white paper does likewise. Accordingly, the IOM committee made several recommendations aimed at a sustainable, trustworthy national CER initiative. The Appendix provides the full text of these recommendations.

Governance

The 2008 IOM committee recommended establishing a national program with authority, overarching responsibility, sustained resources, and adequate capacity to ensure production of credible, unbiased information about what is known and what is not known about clinical effectiveness (2). The current committee expanded this recommendation by calling on the Secretary of Health and Human Services to establish a mechanism, such as a coordinating advisory body, with the authority to strategize, organize, monitor, evaluate, and report on the implementation and impact of the CER Program. The committee recognized that many agencies of the federal government are engaged in research that fits some or all of the key elements of CER. To make the best use of the funds allocated for CER, these agencies should be jointly accountable for spending CER funds on research that reflects the same conceptualization of CER. Therefore, these disparate efforts should use a single definition of CER and the same definitions of outcomes and measures of function and illness. This form of cross-agency coordination is compatible with each agency focusing on the form of research with which it has the greatest experience.

Active Involvement of Consumers in CER

The 2009 ARRA created a means for coordinating federal CER efforts: the Federal Coordinating Council. Seeking to complement the functions of the Council, the IOM committee focused on the potential contribution of consumers of health care to CER and recommended involving them at all levels of the national CER initiative. The recommendation is as follows: The CER Program should fully involve consumers, patients, and their caregivers in key aspects of CER, including strategic planning, priority setting, research proposal development, peer review, and dissemination. The concept that animates this recommendation is a radical but logical departure from past practice: When a principal aim of research is to inform decision makers, listen to the decision makers. Meaningful participation will require consumers to learn more about research, just as they will teach researchers how to frame their investigations to satisfy the needs of consumers who are also decision makers. This approach has precedent. For example, the National Institute for Health and Clinical Excellence in the United Kingdom teaches decision making to consumers who participate in the institute's program.

Methodology of CER

Remembering that the purpose of CER is to inform decision making, consider the principal forms of clinical research. Systematic reviews of the literature summarize a body of evidence.
Over the past 2 decades, they have become indispensable to expert panels that formulate practice guidelines and to policymakers who make insurance coverage decisions. These macro decision makers are starting to use decision models for the same purposes (5).

Two broad categories of research are the starting point for systematic reviews: observational research and randomized trials. Large established databases provide an opportunity to link current health care practices to the outcomes of care. These databases represent critical groups often omitted from randomized trials, but they contain limited data; of note, they seldom specify the rationale for medical decisions. These observational research methods have many advantages (speed, real-world decisions, large numbers of decisions and outcomes, and low cost), but they cannot escape a key limitation: characteristics of the patient that drive real-life clinical decisions may also influence clinical outcomes, leaving uncertainty about whether those characteristics or the intervention itself caused the outcomes. Overcoming the limitations of observational research is the most important frontier of research on study methods.

To overcome these limitations, researchers often randomly assign patients to different interventions. This simple action eliminates much of the uncertainty that plagues the interpretation of observational research. However, we have too often used the conceptual elegance of randomization to answer the wrong questions. We ask, "Does this work?" when our readers want to know, "Is this better than that?" Thus, too often, randomized trials are so-called efficacy studies: They are designed to create near-ideal circumstances to see whether the intervention can possibly work. To this end, researchers exclude many types of patients from efficacy trials and often compare the intervention with a placebo. In these respects, efficacy trials are unlike the conditions that physicians and patients face in daily practice. Another article in this issue (4) describes several exciting new methods for making randomized trials more suitable for CER. The examples of observational research and new randomized trial methods are the basis for the IOM committee's strong recommendation that a national CER program should support research in the methods of clinical research.

The Infrastructure for Observational Research

The committee believes that very large collections of the electronic records of patients can be a valuable resource for CER. Using these data sets, it is possible to compare the outcomes of several effective interventions in a population that is representative of daily care, 2 features that are aligned with the goals of CER. The great numbers of patients in these data sets also make it possible to study subgroups with precision and perhaps identify key predictors of response to an intervention, both of which would facilitate decision making at the individual and population levels.
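The confounding problem described above, in which patient characteristics that drive treatment choice also shape outcomes, can be made concrete with a small simulation. The sketch below is illustrative only: it uses synthetic data, a hypothetical pair of interventions, and assumed effect sizes that do not come from the IOM report. It shows how a naive observational comparison can reverse the apparent direction of benefit, while a randomized comparison recovers it.

```python
# Minimal illustrative simulation (synthetic, hypothetical data; assumed effect sizes).
# Sicker patients preferentially receive intervention A, so a naive observational
# comparison understates A's benefit; randomization breaks that link.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Disease severity influences both treatment choice and outcome (confounding by indication).
severity = rng.normal(size=n)

def outcome(gets_a):
    # Assumed truth: intervention A improves the outcome by 1.0 unit;
    # each unit of severity worsens it by 2.0 units.
    return 1.0 * gets_a - 2.0 * severity + rng.normal(size=n)

# Observational cohort: probability of receiving A rises with severity.
obs_a = rng.random(n) < 1 / (1 + np.exp(-2 * severity))
y_obs = outcome(obs_a)
obs_estimate = y_obs[obs_a].mean() - y_obs[~obs_a].mean()

# Randomized cohort: a coin flip decides treatment, independent of severity.
rand_a = rng.random(n) < 0.5
y_rand = outcome(rand_a)
rand_estimate = y_rand[rand_a].mean() - y_rand[~rand_a].mean()

print("true benefit of A:             1.00")
print(f"naive observational estimate: {obs_estimate:5.2f}")   # biased; A appears harmful
print(f"randomized estimate:          {rand_estimate:5.2f}")  # close to the true 1.00
```

The same synthetic setup also hints at why very large data sets matter for CER: precise subgroup estimates require many patients per subgroup, and only collections on the scale of electronic health records make such stratified comparisons stable.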
References

[1] R. Gottlieb, et al. The New Yorker, 1987.
[2] G. Wilensky. Developing a center for comparative effectiveness information. Health Affairs, 2006.
[3] Amy B. Knudsen, et al. Evaluating Test Strategies for Colorectal Cancer Screening: A Decision Analysis for the U.S. Preventive Services Task Force. Annals of Internal Medicine, 2008.
[4] Haydn Bush, et al. The cost conundrum. Hospitals & Health Networks, 2008.
[5] B. McNeil, et al. Knowing What Works in Health Care: A Roadmap for the Nation. 2008.
[6] Bryan R. Luce, et al. Rethinking Randomized Clinical Trials for Comparative Effectiveness Research: The Need for Transformational Change. Annals of Internal Medicine, 2009.
[7] Nicolas Ulmer, et al. The Cost Conundrum. 2010.