ROC methodology has been expanded in recent years to include multi-disease experiments. To accommodate these changes, different rating formats, either general or disease-specific, can be used. No experimental data are available concerning the possible effects of the rating format on the results of these studies. We performed a multi-observer, multi-disease study in which 196 chest images were rated both with a disease-specific format, in which each disease was evaluated individually, and with a general format, in which cases were evaluated without scoring a specific disease. The results indicate that, for our data set, the overall assessment of accuracy was not significantly affected by the study format used. Thus, in spite of the difficulties in selecting appropriate controls and the necessity of reassessing sample size considerations, the disease-specific format appears to produce an assessment of accuracy equivalent to that produced by the general format. This equivalence permits the use of the disease-specific approach, which more closely simulates the readers' true environment and is more appropriate for comparing imaging systems whose relative accuracy may be disease specific.
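To make the format comparison concrete, the following is a minimal illustrative sketch, not the study's actual analysis: it simulates per-reader confidence ratings for a 196-case set under two formats, computes each reader's ROC area, and compares the formats with a paired test. The reader count, the simulated ratings, and the use of a paired t-test are all assumptions introduced here for illustration.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_cases = 196        # number of chest images, as in the study
n_readers = 6        # hypothetical number of observers
truth = rng.integers(0, 2, size=n_cases)  # 1 = disease present, 0 = normal


def simulate_ratings(truth, separation):
    """Simulate continuous confidence ratings with a given signal separation."""
    return rng.normal(loc=truth * separation, scale=1.0)


# Per-reader ROC areas under each format; both formats use the same
# separation here, mirroring a finding of no significant difference.
auc_general = np.array([
    roc_auc_score(truth, simulate_ratings(truth, separation=1.2))
    for _ in range(n_readers)
])
auc_specific = np.array([
    roc_auc_score(truth, simulate_ratings(truth, separation=1.2))
    for _ in range(n_readers)
])

# Paired comparison of the two formats across readers.
t_stat, p_value = stats.ttest_rel(auc_general, auc_specific)
print(f"Mean AUC (general format):          {auc_general.mean():.3f}")
print(f"Mean AUC (disease-specific format): {auc_specific.mean():.3f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

In practice, multi-reader multi-case ROC studies are typically analyzed with methods that account for both reader and case variability; the paired test above is only a simplified stand-in.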