Detection and recognition of pure tones in noise.

We examine the predictions of a new theorem relating signal identification (specifying a signal as a particular member of a set of potential signals) to signal detection (discriminating the presence of a signal from its absence). The theorem, derived in the context of signal-detection theory, requires that the signals be equally detectable and orthogonal. Our sinusoidal signals are partially masked by noise, and their intensities are adjusted to produce equal signal detectability; we do not examine this assumption of the theorem further. The theorem generally provides a reasonably accurate description of recognition performance for two-signal and four-signal conditions and is equally accurate for both the Yes-No and category-rating procedures. In a preliminary investigation of the orthogonality assumption, we varied the frequency separation between two signals. When the frequency separation is small (20 Hz near 1 kHz), the theorem fails to provide a good description of performance.
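
The abstract does not state the theorem explicitly. As a minimal sketch, assuming the standard signal-detection-theory relation for identifying one of M orthogonal, equally detectable signals in Gaussian noise, P(c) = ∫ φ(x − d′) Φ(x)^(M−1) dx, the predicted recognition performance can be computed from the detection index d′ as below. The function name and the d′ value are illustrative assumptions, not quantities taken from the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def predicted_identification(d_prime: float, m: int) -> float:
    """Predicted proportion correct when identifying one of m orthogonal,
    equally detectable signals, given the detection index d'.

    Uses the standard signal-detection-theory integral
        P(c) = integral of phi(x - d') * Phi(x)**(m - 1) dx,
    an illustrative assumption, not necessarily the exact theorem tested here.
    """
    integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (m - 1)
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

# Hypothetical example: a signal detected at d' = 1.5, identified among
# 2 or 4 alternatives (matching the two- and four-signal conditions).
for m in (2, 4):
    print(f"m = {m}: predicted P(correct) = {predicted_identification(1.5, m):.3f}")
```

Under this relation, equal detectability implies a single d′ that fixes the predicted recognition rate for any set size M, which is what makes the comparison between two-signal and four-signal conditions a test of the theorem.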