Engineering pupil function for optical adversarial attacks.
Adversarial attacks inject imperceptible noise into images to degrade the performance of deep image classification models. However, most existing studies consider attacks in the digital (pixel) domain, i.e., on an image that has already been acquired by an image sensor through sampling and quantization. This paper, for the first time, introduces a scheme for optical adversarial attack, which physically alters the light field information arriving at the image sensor so that the classification model misclassifies the resulting image. We modulate the phase of the light in the Fourier domain using a spatial light modulator placed in the photographic system. The operating parameters of the modulator that realize the attack are obtained by gradient-based optimization that maximizes the cross-entropy loss while minimizing image distortion. Experiments based on both simulation and a real optical system demonstrate the feasibility of the proposed optical attack. We show that our attack can conceal perturbations in the image more effectively than existing pixel-domain attacks. We also verify that the proposed attack is fundamentally different from common optical aberrations such as spherical aberration, defocus, and astigmatism in terms of both perturbation patterns and classification results.
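To make the pipeline concrete, here is a minimal PyTorch sketch of the idea (our illustration, not the authors' released code): a phase mask multiplies the image spectrum in the Fourier (pupil) plane, and gradient-based optimization drives the mask to maximize cross-entropy on the true label while an MSE term limits distortion. The function names `apply_pupil_phase` and `optimize_phase`, the weight `lam`, and the coherent 4f approximation are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def apply_pupil_phase(img: torch.Tensor, phase: torch.Tensor) -> torch.Tensor:
    """Coherent 4f approximation (an assumption): FFT the image, multiply by
    the SLM phase mask in the Fourier plane, inverse FFT, take the intensity."""
    field = torch.fft.fftshift(torch.fft.fft2(img))
    field = field * torch.exp(1j * phase)  # phase-only modulation by the SLM
    out = torch.fft.ifft2(torch.fft.ifftshift(field))
    return out.abs() ** 2  # the sensor records intensity, not the complex field

def optimize_phase(model, img, label, steps=200, lr=0.05, lam=1.0):
    """Gradient-based search for an adversarial phase mask: maximize
    cross-entropy on the true label while penalizing image distortion."""
    phase = torch.zeros(img.shape[-2:], requires_grad=True)
    opt = torch.optim.Adam([phase], lr=lr)
    for _ in range(steps):
        adv = apply_pupil_phase(img, phase)
        logits = model(adv.unsqueeze(0))
        # Negative cross-entropy is minimized, i.e. cross-entropy is maximized;
        # the MSE term keeps the modulated image close to the clean one.
        loss = -F.cross_entropy(logits, label) + lam * F.mse_loss(adv, img)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phase.detach()
```

Here `img` is a real (C, H, W) tensor treated as the field amplitude and `label` is a LongTensor of shape (1,) holding the ground-truth class. A faithful incoherent-imaging model would instead convolve the scene with the point spread function derived from the pupil; the coherent shortcut above is only meant to show how gradients reach the SLM parameters.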