The impact of target luminance and radiance on night vision device visual performance testing

Visual performance through night-vision devices (NVDs) is a function of many parameters, such as target contrast, objective and eyepiece lens focus, the signal-to-noise ratio and overall quality of the image intensifier tube, night-vision goggle (NVG) gain, and NVG output luminance to the eye. The NVG output luminance depends on the NVG-sensitive radiance emitted by (or reflected from) the visual acuity target (usually a vision testing chart). The primary topic of this paper is the standardization (or lack thereof) of the radiance levels used for NVG visual acuity testing. The visual acuity chart light level may be specified in either photometric (luminance) units or radiometric (radiance) units. The light levels are often described as “starlight,” “quarter moon,” or “optimum” and may not provide any quantitative photometric or radiometric information at all. While these terms may be useful to pilots and other users of night-vision devices, they are inadequate for accurate visual performance testing, because there is no agreement within the night vision community on the radiance or luminance level of the target that corresponds to each named light level. This paper examines the range of values for “starlight,” “quarter moon,” and “optimum” light commonly used by the night vision community and referenced in the literature. The impact of variations in target luminance/radiance levels on performance testing is also examined. Arguments are presented for standardizing on NVG-weighted radiometric units, rather than photometric units, for testing night-vision devices. In addition, the differences between theoretical weighted radiance and actual weighted radiance are discussed.
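To make the photometric/radiometric distinction concrete, the sketch below integrates a single target spectral radiance distribution against both the photopic luminous efficiency function and an assumed NVG spectral response. The Gaussian curves standing in for V(λ), the NVG response, and the target spectrum are illustrative assumptions only, not values taken from this paper; in practice the CIE V(λ) curve and measured tube responses are tabulated.

```python
import numpy as np

# Minimal sketch comparing a photometric quantity (luminance) with an
# NVG-weighted radiometric quantity for one spectral radiance distribution.
# All spectral shapes below are illustrative Gaussian stand-ins, NOT
# measured data.

wl = np.arange(380.0, 931.0, 1.0)   # wavelength grid, nm (1 nm steps)
d_wl = 1.0                          # integration step, nm

# Photopic luminous efficiency V(lambda), approximated here as a
# Gaussian peaking at 555 nm.
V = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)

# Assumed NVG relative spectral response (unity peak); Gen III image
# intensifiers respond mainly over roughly 600-900 nm.
S_nvg = np.exp(-0.5 * ((wl - 750.0) / 80.0) ** 2)
S_nvg[(wl < 600.0) | (wl > 900.0)] = 0.0

# Example target spectral radiance, W / (m^2 sr nm): a near-IR-heavy
# spectrum such as a tungsten-illuminated chart might produce.
L_spec = 1.0e-6 * np.exp(-0.5 * ((wl - 800.0) / 120.0) ** 2)

# Photometric: luminance in cd/m^2 (683 lm/W converts watts to lumens).
luminance = 683.0 * np.sum(L_spec * V) * d_wl

# Radiometric: NVG-weighted radiance in W / (m^2 sr).
nvg_radiance = np.sum(L_spec * S_nvg) * d_wl

print(f"luminance             = {luminance:.3e} cd/m^2")
print(f"NVG-weighted radiance = {nvg_radiance:.3e} W/(m^2 sr)")
```

Because the intensifier responds mainly in the near infrared, where V(λ) is nearly zero, two targets with equal luminance can present very different NVG-weighted radiances; this is the essence of the argument for radiometric rather than photometric standardization.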