Grading quality of evidence and strength of recommendations for diagnostic tests and strategies.

Faced with the plethora of new diagnostic and therapeutic interventions, busy physicians need clear guidance on the best approaches to follow for their patients. This need has led to such a proliferation of practice guidelines (PGs) that, for diabetes mellitus alone, more than 150 guidelines are available worldwide. In the "jungle" of PGs, many provide conflicting guidance, and the literature displays extensive variation in the approaches used to formulate recommendations. Therefore, there is an international move toward standardizing guideline methodology so that recommendations are developed through a systematic and transparent process and the link between the evidence and the strength of recommendations is explicitly documented. This commentary provides a brief overview of the principles for assessing the strength of evidence and the challenges guideline developers face in formulating graded recommendations related to the use of laboratory tests.

Guidelines aim to close the gap between research and practice and to provide rigorously developed, valid, and applicable recommendations for achieving the best possible outcomes. Formulating evidence-based guidelines implies a process in which the body of evidence has been systematically explored, its quality critically evaluated, and the research findings synthesized and translated into recommendations for best practice. In PGs, quality of evidence indicates the degree of confidence that the evidence is adequate to support recommendations. Quality of evidence can be judged by considering the following aspects (1):

1. Study design usually defines the level of evidence. For example, questions on the efficacy of treatment are best answered by randomized controlled trials (RCTs), and questions about diagnostic accuracy are best addressed by properly designed prospective cohort studies.

2. Internal validity refers to a lack of design-related biases that could threaten the soundness of the study. In diagnostic accuracy studies, various forms of verification bias (illustrated in the sketch below), spectrum bias, or review bias can lead …
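To make the effect of partial verification bias concrete, the following minimal simulation sketches what happens when every index-test-positive patient receives the reference standard but only a fraction of test-negatives do. The parameters (prevalence, true sensitivity and specificity, verification fraction) are illustrative assumptions, not values from the commentary; the point is only the direction of the distortion: naive sensitivity is inflated and naive specificity is deflated.

```python
import random

random.seed(1)

# Illustrative assumptions (not drawn from the commentary)
N = 100_000
prevalence = 0.10
true_sens, true_spec = 0.80, 0.90
verify_neg_fraction = 0.20   # only 20% of index-test-negatives get the reference standard

tp = fp = fn = tn = 0        # counts among *verified* patients only
for _ in range(N):
    diseased = random.random() < prevalence
    test_pos = random.random() < (true_sens if diseased else 1 - true_spec)
    # Partial verification: all test-positives verified, only a sample of test-negatives
    verified = test_pos or random.random() < verify_neg_fraction
    if not verified:
        continue
    if diseased and test_pos:
        tp += 1
    elif diseased:
        fn += 1
    elif test_pos:
        fp += 1
    else:
        tn += 1

naive_sens = tp / (tp + fn)
naive_spec = tn / (tn + fp)
print(f"true sensitivity {true_sens:.2f} -> naive estimate {naive_sens:.2f} (inflated)")
print(f"true specificity {true_spec:.2f} -> naive estimate {naive_spec:.2f} (deflated)")
```

Because missed cases (false negatives) are under-represented among verified patients, the apparent sensitivity rises well above its true value, which is one reason such design flaws lower the quality of evidence for a diagnostic test.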

[1] H. J. Schünemann, et al. GRADE: grading quality of evidence and strength of recommendations for diagnostic tests and strategies, 2008, BMJ: British Medical Journal.

[2] G. Guyatt, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations, 2008, BMJ: British Medical Journal.

[3] N. McKoy, et al. Systems to rate the strength of scientific evidence, 2002, Evidence Report/Technology Assessment.

[4] R. Upshur. Are all evidence-based practices alike? Problems in the ranking of evidence, 2003, CMAJ: Canadian Medical Association Journal.