A comparison of transient-evoked and distortion product otoacoustic emissions in normal-hearing and hearing-impaired subjects.

The ability of transient-evoked otoacoustic emissions (TEOAEs) and distortion product otoacoustic emissions (DPOAEs) to distinguish normal hearing from hearing impairment was evaluated in 180 subjects. TEOAEs were analyzed into octave or one-third octave bands for frequencies ranging from 500 to 4000 Hz. Decision theory was used to generate receiver operating characteristic (ROC) curves for each of three measurements (OAE amplitude, OAE/noise, reproducibility), for each OAE measure (octave TEOAEs, one-third octave TEOAEs, DPOAEs), for octave frequencies from 500 to 4000 Hz, and for seven audiometric criteria ranging from 10 to 40 dB HL. At 500 Hz, neither TEOAEs nor DPOAEs could separate normal from impaired ears. At 1000 Hz, both TEOAE measures identified hearing status more accurately than DPOAEs. At 2000 Hz, all OAE measures performed equally well. At 4000 Hz, DPOAEs were better able than TEOAEs to distinguish normal from impaired ears. Almost without exception, OAE/noise and reproducibility performed comparably and were superior to OAE amplitude, although the differences were small. TEOAEs analyzed into octave bands performed better than TEOAEs analyzed into one-third octave bands. Under standard test conditions, OAE test performance appears to be limited by background noise, especially at low frequencies.
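
To illustrate the decision-theory analysis described above, the sketch below traces an ROC curve for a single OAE measurement (e.g., OAE/noise in dB at one octave frequency) against one audiometric criterion by sweeping a decision threshold. This is a minimal illustration, not the authors' analysis code; the group sizes, means, and standard deviations are hypothetical and stand in for the study's data.

```python
# Minimal ROC sketch for one OAE measurement and one audiometric criterion.
# All data are hypothetical; only the decision-theory procedure follows the text.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical OAE/noise values (dB) at one octave frequency.
oae_noise_normal = rng.normal(loc=10.0, scale=4.0, size=90)   # normal-hearing ears
oae_noise_impaired = rng.normal(loc=2.0, scale=4.0, size=90)  # impaired ears

scores = np.concatenate([oae_noise_normal, oae_noise_impaired])
# Label 1 = normal hearing (threshold better than the audiometric criterion).
labels = np.concatenate([np.ones(90), np.zeros(90)])

# Sweep decision thresholds from "never call normal" downward:
# the rule is "call the ear normal if OAE/noise >= threshold".
thresholds = np.concatenate([[np.inf], np.sort(np.unique(scores))[::-1]])
hit_rates = []          # P(called normal | truly normal)
false_alarm_rates = []  # P(called normal | truly impaired)
for t in thresholds:
    called_normal = scores >= t
    hit_rates.append(np.mean(called_normal[labels == 1]))
    false_alarm_rates.append(np.mean(called_normal[labels == 0]))

# Area under the ROC curve via the trapezoidal rule:
# 0.5 = chance performance, 1.0 = perfect separation of normal from impaired ears.
auc = np.trapz(hit_rates, false_alarm_rates)
print(f"ROC area = {auc:.3f}")
```

Repeating this sweep for each measurement, OAE measure, test frequency, and audiometric criterion yields the family of ROC curves the study compares; larger areas indicate better separation of normal from impaired ears.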