Better Information for Better Health Care: The Evidence-based Practice Center Program and the Agency for Healthcare Research and Quality

In 1989, the U.S. Congress established the Agency for Health Care Policy and Research (now the Agency for Healthcare Research and Quality [AHRQ]) with the ambitious mission to improve the quality, safety, efficiency, and effectiveness of health care for all Americans (www.ahrq.gov/about/budgtix.htm). Motivated by research showing that practice varied widely across the United States, lawmakers hoped that AHRQ could improve quality and efficiency by clarifying the evidence about what works and what doesn't work in health care (1, 2). Improving quality and reducing costs have proven to be formidable challenges, reflecting systemic problems in the health care system that will not be solved by evidence alone. Nonetheless, a 2001 Institute of Medicine report on how to address this quality chasm identified evidence-based decision making as one of the core components of delivering safe, effective, efficient, and patient-centered care (3, 4). Recent examples of high-dose chemotherapy for breast cancer, postmenopausal hormone therapy, and cyclooxygenase-2 inhibitors have starkly illustrated the dangers to patients and the system at large when health care decisions are driven by advocacy and marketing rather than by a balanced examination of the evidence.

The Agency for Healthcare Research and Quality has pursued its commitment to evidence-based practice through research and programs that have produced important new information about the benefits, risks, costs, and cost-effectiveness of specific treatments and technologies for common and costly conditions such as back pain, prostate disease, and pneumonia (5-7). Perhaps the most visible commitment to evidence-based practice was AHRQ's support of a clinical practice guideline program that produced 19 guidelines from 1990 to 1996 (www.ahrq.gov/clinic/cpgonline.htm) on important conditions such as heart failure and otitis media (8). The program was initially requested by Congress; the guidelines proved popular with many stakeholders, and their explicit methods helped elevate the standards for evidence-based guidelines.

At the same time, the guidelines also revealed the limitations of centrally developed, government-sponsored guidelines. The process of convening individual panels, commissioning background reports, and developing guidelines was slow and expensive. Although panel members were all independent nongovernment experts, critics assumed that the primary purpose of the guidelines was to help the government save money by discouraging expensive procedures. Bringing primary care clinicians, specialists, nurses, and methodologists together was an important advance but increased the perceived threat to some specialists. More important, no single guideline could anticipate all the considerations that influence clinical practice recommendations. Organizations that implemented AHRQ guidelines often modified them to address local issues and facilitate adoption by their physicians (9).

The greatest challenge to AHRQ's guideline program occurred when several guidelines became lightning rods amidst a larger movement to shrink government's role in clinical policy. Although AHRQ narrowly escaped an attempt in 1995 to eliminate its funding (10), it was already redesigning its guideline program to address the barriers to getting evidence into practice.
In 1997, the Agency began preparations for the National Guideline Clearinghouse, launched the first of a series of research programs on how to translate research into practice, and announced that it would cease producing guidelines while increasing the production of evidence through a new Evidence-based Practice Center (EPC) program.

Establishing the EPCs

The EPC program was designed to provide the best available evidence to decision makers while increasing the roles for a wide variety of stakeholders. The Agency selected 12 centers across North America to develop systematic reviews of the evidence on important health care questions. Questions to be addressed would be nominated by professional societies, health plans, insurers, federal and state agencies, and other private and public groups, who would now assume the responsibility of using the evidence to improve practice through guidelines, quality initiatives, coverage decisions, research programs, advocacy, and other activities. The initial EPCs were chosen on the basis of their broad expertise in research methods and systematic reviews. They consisted of academic centers, including the 4 North American Cochrane Centers at that time, and private nonprofit research organizations. In June 2002, AHRQ awarded new 5-year contracts to 13 centers, including renewed contracts for 10 of the original EPCs (Table 1).

Table 1. Evidence-based Practice Centers (EPCs)

EPC Process

The systematic reviews produced by the EPCs are intended to inform a wide range of health care decisions. Of the current 13 EPCs, 3 conduct technology assessments for the Centers for Medicare & Medicaid Services to inform coverage decisions for new technologies. Another center is dedicated to supporting the work of the U.S. Preventive Services Task Force, and the remaining 9 generalist EPCs conduct reviews on a more diverse range of topics nominated by outside partners. In addition, various federal agencies fund systematic reviews through the 13 EPCs to support consensus conferences, research planning, policy initiatives, and other programs. Table 2 lists the reports released in 2004, along with their partners; the AHRQ Web site lists all 119 reports released to date (www.ahrq.gov/clinic/epcix.htm).

Table 2. Reports Released by the Evidence-based Practice Center Program in 2004

Nominating organizations describe why the issue is important, what preliminary questions should be addressed, and how they plan to implement the findings of an evidence report. The Agency prioritizes nominated topics according to the clinical and economic burden of the condition; controversy over existing evidence; relevance to AHRQ priority populations and federal health care programs; and the potential for the review to change practice, including the partner's plan for using and evaluating the impact of the report.
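To make these prioritization criteria concrete, here is a minimal, hypothetical sketch of how nominated topics might be scored and ranked. The criterion names mirror the text above, but the 0-to-5 scale, the equal weighting, and the example topics are invented for illustration; AHRQ's actual prioritization is a deliberative process, not a formula.

```python
from dataclasses import dataclass

@dataclass
class TopicNomination:
    """One nominated topic, scored 0-5 on each prioritization criterion."""
    name: str
    burden: int            # clinical and economic burden of the condition
    controversy: int       # controversy over the existing evidence
    relevance: int         # relevance to priority populations and programs
    change_potential: int  # potential for the review to change practice

def priority_score(topic: TopicNomination) -> float:
    # Equal weighting is an assumption made purely for this sketch.
    return (topic.burden + topic.controversy
            + topic.relevance + topic.change_potential) / 4.0

# Invented example nominations.
nominations = [
    TopicNomination("Management of chronic low back pain", 5, 4, 4, 4),
    TopicNomination("Imaging for a rare pediatric condition", 2, 3, 2, 1),
]

for topic in sorted(nominations, key=priority_score, reverse=True):
    print(f"{topic.name}: priority {priority_score(topic):.2f}")
```

A real ranking would also have to weigh the partner's implementation plan, which the text singles out as part of judging a review's potential to change practice.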
An EPC Coordinating Center (run by the Lewin Group, a health care and human services consulting firm) conducts an abbreviated literature search to ensure that the evidence is sufficient for a systematic review; surveys existing guidelines and reviews and consults with experts to determine which key questions are worth addressing; assesses areas of controversy or practice variation; and scans for important ongoing studies that may justify deferring a review. Once a topic's key questions are developed and the work is assigned, the EPC convenes a panel of 5 to 8 content experts nominated by the EPC, AHRQ, and the partner or partners. The panel helps refine and prioritize questions, provides advice on which types of studies to include or exclude, and suggests other analyses that may be useful.

The EPCs conduct a comprehensive, structured search of MEDLINE, EMBASE, and additional databases as appropriate to the topic. Articles are selected according to predefined inclusion and exclusion criteria, reviewers rate the quality of individual studies, and results are summarized by using quantitative (that is, meta-analytic) or qualitative methods as deemed appropriate. Draft reviews are circulated widely for peer review to content experts, representatives of relevant specialty organizations, federal agencies, and AHRQ scientific staff. A report detailing the disposition of reviewers' comments is submitted to AHRQ with the final report. The timelines for EPC reports are challenging; we usually request final reports within 12 months of topic assignment. Reports are released on the AHRQ Web site and are made available in print on request.

The Role of the EPC Program in Evidence-Based Medicine

The EPC program exists within a growing international array of programs that develop evidence-based information to help guide policy and practice. Many EPC researchers are active in the Cochrane Collaboration (www.cochrane.org), and EPCs regularly search the Cochrane Controlled Trials Register to identify relevant trials. Where Cochrane or other high-quality reviews have addressed a question of interest, reports incorporate or update that information and focus on areas where the research has not been synthesized. Several EPC reports have involved formal collaborations with Cochrane and the Health Technology Assessment program of the United Kingdom National Health Service (11). The Agency is an active member of the International Network of Agencies for Health Technology Assessment (INAHTA; www.inahta.org), and reports draw on the work of other INAHTA organizations where appropriate. In addition to being available through the National Library of Medicine Bookshelf, AHRQ EPC reports are included in the University of York's Centre for Reviews and Dissemination databases (www.york.ac.uk/inst/crd).

Despite these similarities, the AHRQ EPC program differs in important ways from these programs. Although the EPC program is supported by government, many of our partners are private organizations. As the primary funder, AHRQ exerts more centralized control to ensure that reports address the needs of users, in contrast to the more decentralized, bottom-up approach of the Cochrane Collaboration. The EPC reports summarize the evidence but do not translate the findings into specific clinical recommendations or guidance, as is done by the United Kingdom National Institute for Health and Clinical Excellence. The range of evidence considered by EPC reports is also unique, necessitated by the diversity of the clinical and policy questions they have been asked to address, from bioterrorism training to health literacy (12).

Methods Research

An important goal of the EPC program is to advance the methods for conducting and reporting systematic reviews. In response to a request from Congress, the Research Triangle Institute/University of
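To illustrate the quantitative (meta-analytic) synthesis step described in the EPC process above, the following is a minimal sketch of inverse-variance pooling with a DerSimonian-Laird random-effects model, a standard method for combining study results when between-study heterogeneity is expected. The five studies' log odds ratios and variances are invented for illustration and do not come from any EPC report.

```python
import math

def random_effects_pool(effects, variances):
    """Pool study effect estimates (e.g., log odds ratios) across studies
    using the DerSimonian-Laird random-effects model."""
    # Fixed-effect (inverse-variance) weights and pooled estimate.
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)

    # Cochran's Q statistic measures between-study heterogeneity.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights add tau^2 to each study's variance.
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical log odds ratios and variances from five trials.
log_ors = [-0.55, -0.05, -0.60, 0.15, -0.30]
variances = [0.02, 0.03, 0.04, 0.03, 0.02]

pooled, (lo, hi), tau2 = random_effects_pool(log_ors, variances)
print(f"Pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f}); tau^2 = {tau2:.3f}")
```

Running the script prints a pooled odds ratio with a 95% CI and the estimated between-study variance; an EPC report would pair such an estimate with the qualitative study-quality ratings described above.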

References

[1] R. Duncan, et al. Diagnosis and Management of Acute Otitis Media. Jurnal Penelitian Perawat Profesional, 2019.

[2] S. Datta, et al. Setting the Target for a Better Cervical Screening Test: Characteristics of a Cost-Effective Test for Cervical Neoplasia Screening. Journal of Lower Genital Tract Disease, 2000.

[3] G. Guyatt, et al. Grading quality of evidence and strength of recommendations. BMJ, 2004.

[4] J. Brown, et al. The paradox of guideline implementation: how AHCPR's depression guideline was adapted at Kaiser Permanente Northwest Region. The Joint Commission Journal on Quality Improvement, 1995.

[5] J. Grimshaw, et al. Effectiveness and efficiency of guideline dissemination and implementation strategies. International Journal of Technology Assessment in Health Care, 2005.

[6] J. Denis, et al. Convergent evolution: the academic and policy roots of collaborative research. Journal of Health Services Research & Policy, 2003.

[7] B. Gray. The legislative battle over health services research. Health Affairs, 1992.

[8] J. Frenk, et al. Balancing relevance and excellence: organizational responses to link research with decision making. Social Science & Medicine, 1992.

[9] N. Weissman, et al. Patient Outcomes Research Teams and the Agency for Health Care Policy and Research. Health Services Research, 1990.

[10] E. McGlynn, et al. The quality of health care delivered to adults in the United States. The New England Journal of Medicine, 2003.

[11] N. McKoy, et al. Systems to rate the strength of scientific evidence. Evidence Report/Technology Assessment, 2002.

[12] Alastair Baker, et al. Crossing the Quality Chasm: A New Health System for the 21st Century. BMJ, 2001.

[13] K. P. Krages, et al. Rehabilitation for traumatic brain injury in children and adolescents. Evidence Report/Technology Assessment, 1999.

[14] F. Song, et al. Evaluating non-randomised intervention studies. Health Technology Assessment, 2003.

[15] H. Davies, et al. Increasing research impact through partnerships: evidence from outside health care. Journal of Health Services Research & Policy, 2003.

[16] K. Walshe, et al. Evidence-based management: from theory to practice in health care. The Milbank Quarterly, 2001.

[17] Novel mode of knowledge production? Producers and consumers in health services research. Journal of Health Services Research & Policy, 2003.

[18] H. Fineberg, et al. Understanding Risk: Informing Decisions in a Democratic Society. 1996.

[19] D. C. McCrory, et al. Mathematical model for the natural history of human papillomavirus infection and cervical carcinogenesis. American Journal of Epidemiology, 2000.

[20] S. Dewilde, et al. The Cost-Effectiveness of Screening Programs Using Single and Multiple Birth Cohort Simulations: A Comparison Using a Model of Cervical Cancer. Medical Decision Making, 2004.

[21] V. Hasselblad, et al. Evaluation of cervical cytology. Evidence Report/Technology Assessment, 1999.

[22] Kathleen N. Lohr. A Systematic Review of the Literature. 2004.

[23] D. Stryer, et al. The outcomes of outcomes and effectiveness research: impacts and lessons from the first decade. Health Services Research, 2000.

[24] H. D. Banta, et al. The political dimension in health care technology assessment programs. 1990.

[25] Nora Jacobson, et al. Linkage and exchange at the organizational level: a model of collaboration between research and policy. Journal of Health Services Research & Policy, 2003.

[26] K. Kerlikowske, et al. Risk of cervical cancer associated with extending the interval between cervical-cancer screenings. The New England Journal of Medicine, 2003.

[27] Amy L. Pablo, et al. Toward a communicative perspective of collaborating in research: the case of the researcher-decision-maker partnership. Journal of Health Services Research & Policy, 2003.

[28] Darren A. DeWalt, et al. Literacy and health outcomes. Journal of General Internal Medicine, 2006.

[29] David Atkins, et al. Challenges in Using Nonrandomized Studies in Systematic Reviews of Treatment Interventions. Annals of Internal Medicine, 2005.

[30] Eduard Bonet, et al. Sharing and expanding academic and practitioner knowledge in health care. Journal of Health Services Research & Policy, 2003.

[31] M. Maglione, et al. Pharmacological and surgical treatment of obesity. Evidence Report/Technology Assessment, 2004.

[32] A. Oxman, et al. Health policy-makers' perceptions of their use of evidence: a systematic review. Journal of Health Services Research & Policy, 2002.

[33] M. Boyle, et al. Treatment of attention-deficit/hyperactivity disorder. Evidence Report/Technology Assessment, 1999.

[34] D. Grady, et al. Results of systematic review of research on diagnosis and treatment of coronary heart disease in women. Evidence Report/Technology Assessment, 2003.

[35] B. Gray, et al. AHCPR and the changing politics of health services research. Health Affairs, 2003.

[36] G. Samsa, et al. Dissemination of Evidence-based Practice Center Reports. Annals of Internal Medicine, 2005.

[37] M. Maglione, et al. Ephedra and ephedrine for weight loss and athletic performance enhancement: clinical efficacy and side effects. Evidence Report/Technology Assessment, 2003.

[38] M. McDonagh, et al. Effectiveness and cost-effectiveness of echocardiography and carotid imaging in the management of stroke. Evidence Report/Technology Assessment, 2002.

[39] Janet M. Corrigan. Priority areas for national action: transforming health care quality. 2003.

[40] Douglas K. Owens. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies (Vol. 1: Series Overview and Methodology). 2004.

[41] L. S. Chan, et al. Diagnosis, natural history, and late effects of otitis media with effusion. Evidence Report/Technology Assessment, 2002.

[42] N. Gavin. Perinatal Depression: Prevalence, Screening Accuracy, and Screening Outcomes Summary. 2005.