Diagnostic Accuracy Measures

Background: An increasing number of diagnostic tests and biomarkers have been validated over the last decades, and this will remain a prominent field of research because of the drive toward personalized medicine. Strict evaluation is needed whenever we aim to validate a potential diagnostic tool, and the first requirement a new testing procedure must fulfill is diagnostic accuracy.

Summary: Diagnostic accuracy measures describe the ability of a test to discriminate between disease and health and/or to predict the presence or absence of disease. This discriminative and predictive potential can be quantified by measures such as sensitivity and specificity, predictive values, likelihood ratios, the area under the receiver operating characteristic curve, overall accuracy and the diagnostic odds ratio. Some measures are useful for discrimination, while others serve as predictive tools. Measures of diagnostic accuracy differ in how they depend on the prevalence, spectrum and definition of the disease. In general, they are extremely sensitive to study design: studies that do not meet strict methodological standards usually over- or underestimate the indicators of test performance and limit the applicability of the results.

Key Messages: The testing procedure should be verified in a suitable population that includes patients with both mild and severe disease, so that the disease spectrum is comparable to the intended clinical setting. Sensitivity and specificity are not predictive measures. Predictive values depend on disease prevalence, and they can be transposed to other settings only when the study is based on a suitable population (e.g. a screening study). Likelihood ratios are often the optimal choice for reporting diagnostic accuracy. Diagnostic accuracy measures must be reported with their confidence intervals, and paired measures (sensitivity and specificity, predictive values, or likelihood ratios) should always be reported at clinically meaningful thresholds. How much discriminative or predictive power is needed depends on the clinical diagnostic pathway and on the costs of misclassification (false positives and false negatives).
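
For reference, the standard textbook definitions behind these measures (not a formula set taken from the article itself) can be written in terms of true and false positives and negatives (TP, FP, FN, TN) and the pre-test probability (prevalence) p; the second line makes explicit why predictive values shift with prevalence while sensitivity, specificity and likelihood ratios do not.

```latex
\mathrm{Se} = \frac{TP}{TP + FN}, \qquad
\mathrm{Sp} = \frac{TN}{TN + FP}, \qquad
LR^{+} = \frac{\mathrm{Se}}{1 - \mathrm{Sp}}, \qquad
LR^{-} = \frac{1 - \mathrm{Se}}{\mathrm{Sp}}, \qquad
DOR = \frac{LR^{+}}{LR^{-}}

PPV = \frac{\mathrm{Se}\,p}{\mathrm{Se}\,p + (1 - \mathrm{Sp})(1 - p)}, \qquad
NPV = \frac{\mathrm{Sp}\,(1 - p)}{\mathrm{Sp}\,(1 - p) + (1 - \mathrm{Se})\,p}
```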

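A minimal computational sketch of the same ideas, assuming a hypothetical 2x2 table (the counts and function names below are illustrative, not taken from the article): it computes the paired measures from the table and then recomputes the positive predictive value at several prevalences via Bayes' theorem, showing that predictive values change while sensitivity and specificity stay fixed.

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Compute common diagnostic accuracy measures from a 2x2 table (illustrative sketch)."""
    sensitivity = tp / (tp + fn)                 # P(test+ | disease)
    specificity = tn / (tn + fp)                 # P(test- | no disease)
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    lr_pos = sensitivity / (1 - specificity)     # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity     # negative likelihood ratio
    dor = lr_pos / lr_neg                        # diagnostic odds ratio
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall accuracy
    return {
        "sensitivity": sensitivity, "specificity": specificity,
        "PPV": ppv, "NPV": npv,
        "LR+": lr_pos, "LR-": lr_neg,
        "DOR": dor, "accuracy": accuracy,
    }


def ppv_at_prevalence(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem for a given disease prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)


if __name__ == "__main__":
    # Hypothetical counts chosen only for illustration: Se = 0.90, Sp = 0.95.
    m = diagnostic_measures(tp=90, fp=50, fn=10, tn=950)
    print(m)
    # Same sensitivity and specificity, very different PPV as prevalence changes:
    for prev in (0.01, 0.10, 0.50):
        print(prev, round(ppv_at_prevalence(m["sensitivity"], m["specificity"], prev), 3))
```

With these illustrative counts (Se = 0.90, Sp = 0.95), the PPV is roughly 0.15 at 1% prevalence but roughly 0.95 at 50% prevalence, which is exactly why predictive values cannot be carried over to settings with a different prevalence.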