What is the best index of detectability?

Various indices that have been proposed as measures of detectability (for unequal-variance normal distributions of signal and nonsignal) are discussed. It is argued that the best measure is an index, denoted by $d_a$, defined as $\sqrt{2}$ times the perpendicular distance from the origin of the normal-deviate graph to the straight-line receiver operating characteristic (ROC). It is shown that $d_a/\sqrt{2}$ is equal to $z(A)$, the normal transform of the area under the ROC curve, $P(A)$. The effect of changes in the variance of the signal distribution on $d_a$ and competing indices is described. "Nonparametric" indices, appropriate when normality is not assumed, are also discussed, and an index based on the difference distribution formed from two rating distributions is proposed. This index, too, is related to the area under the ROC curve. The sampling variability of $z(A)$ was investigated by computer simulation and found to be generally lower than the theoretical sampling variability. Some simplifications of the conclusions drawn from an earlier simulation are proposed.
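As a minimal illustrative sketch (not the authors' analysis code), the relations stated above can be made concrete for the unequal-variance normal model: on normal-deviate axes the ROC is the line $z_H = a + b\,z_F$, $d_a$ is $\sqrt{2}$ times the perpendicular distance from the origin to that line, and $P(A) = \Phi(d_a/\sqrt{2})$. The parameter values and variable names below are assumptions chosen only for illustration.

```python
# Sketch: computing d_a and P(A) under an assumed unequal-variance normal model.
# Nonsignal ~ N(0, 1); signal ~ N(mu_s, sigma_s^2). Example values are arbitrary.
import numpy as np
from scipy.stats import norm

mu_s, sigma_s = 1.5, 1.25

# On normal-deviate (z) axes the ROC is the straight line z_H = a + b * z_F,
# with intercept a = mu_s / sigma_s and slope b = 1 / sigma_s.
a = mu_s / sigma_s
b = 1.0 / sigma_s

# d_a is sqrt(2) times the perpendicular distance from the origin to that line.
d_a = np.sqrt(2.0) * a / np.sqrt(1.0 + b**2)

# Equivalent closed form: d_a = (mu_s - mu_n) / sqrt((sigma_n^2 + sigma_s^2) / 2).
d_a_direct = mu_s / np.sqrt((1.0 + sigma_s**2) / 2.0)

# z(A) = d_a / sqrt(2); the area under the ROC curve is its normal transform.
z_A = d_a / np.sqrt(2.0)
P_A = norm.cdf(z_A)

print(f"d_a = {d_a:.4f} (check: {d_a_direct:.4f}), z(A) = {z_A:.4f}, P(A) = {P_A:.4f}")
```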