JNCI | Editorials

The process to discover and develop molecular biomarkers for cancer diagnosis (or prognosis) is a work in progress and is evolving. "Discovery" research, searching for markers by use of high-throughput technology without an a priori hypothesis, has produced few useful biomarkers over the last 10 years despite numerous claims in research publications and news reports about "95% sensitivity and 95% specificity" or better. The degree of disconnect between claims and products should make us ask why progress is so slow: Is it the normal stop-and-start of science? Or is there some systemic problem with the process that we currently use to discover and develop markers? After all, it took decades to evolve the process to discover and develop drug therapies. So, where does the evolution of the discovery and development of biomarkers stand, and how can we improve the process?

Although the process is complicated, there are two major questions to address in research. First, can a marker discriminate well? That is, can it discriminate cleanly and reliably (eg, reproducibly and not due to artifact or bias) between persons with early-stage cancer vs those without cancer (or good vs bad prognosis)? Second, does that discrimination, when coupled with an intervention such as surgery or chemotherapy, lead to improved outcome? Answering the second question requires a randomized clinical trial, a design that is powerful and well understood but also expensive and time consuming. A randomized clinical trial is not even considered until a marker clearly shows reliable discrimination that warrants such an effort. Failures in current marker research occur not at the randomized clinical trial stage but earlier, when results that appear to show promising discrimination in early discovery turn out not even to be reproducible. Pepe et al.
(1) proposed a formal structure in 2001 "to guide the process of biomarker development" consisting of five "phases [that] are generally ordered according to the strength of evidence that each provides in favor of the biomarker, from weakest to strongest [and] the results of earlier phases are generally necessary to design later phases." The phase structure has been widely adopted for use in various research projects, including, as noted by Feng et al. (2), "by the EDRN [the National Cancer Institute's (NCI) Early Detection Research Network], a number of Specialized Program of Research Excellence (SPORE) consortia
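The "95% sensitivity and 95% specificity" claims discussed above are computed from a 2x2 table of marker calls against true disease status. A minimal sketch, using hypothetical counts (the function and numbers here are illustrative, not from any study cited in this editorial):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from 2x2 confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of true cancers the marker flags
    specificity = tn / (tn + fp)  # fraction of cancer-free persons it clears
    return sensitivity, specificity

# Hypothetical discovery set: 100 early-stage cancers, 100 cancer-free controls.
sens, spec = sensitivity_specificity(tp=95, fn=5, tn=95, fp=5)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# prints: sensitivity=0.95, specificity=0.95
```

The arithmetic is trivial; the editorial's point is that such a result in a small discovery set can still fail to reproduce if it reflects artifact or bias rather than real discrimination.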
[1] Buntinx F, et al. The evidence base of clinical diagnosis. 2008.
[2] Janes H, et al. Pivotal evaluation of the accuracy of a biomarker used for classification or prediction: standards for study design. Journal of the National Cancer Institute. 2008.
[3] Ransohoff DF, et al. How to improve reliability and efficiency of research about molecular markers: roles of phases, guidelines, and study design. Journal of Clinical Epidemiology. 2007.
[4] Pepe MS, et al. Phases of biomarker development for early detection of cancer. Journal of the National Cancer Institute. 2001.
[5] Prentice R, et al. Research issues and strategies for genomic and proteomic biomarker discovery and validation: a statistical perspective. Pharmacogenomics. 2004.
[6] Eberlein T. A multigene assay to predict recurrence of tamoxifen-treated, node-negative breast cancer. 2006.
[7] Cronin M, et al. A multigene assay to predict recurrence of tamoxifen-treated, node-negative breast cancer. The New England Journal of Medicine. 2004.
[8] Martin CF, et al. Assessment of serum proteomics to detect large colon adenomas. Cancer Epidemiology, Biomarkers and Prevention. 2008.
[9] Knottnerus JA, et al. Assessment of the accuracy of diagnostic tests: the cross-sectional study. Journal of Clinical Epidemiology. 2003.