Adversarial Attack Threats to Computer Vision: A Survey

Deep learning has shown impressive performance on complex problems that are difficult for conventional machine learning algorithms, in domains such as medical diagnosis, autonomous driving, identity verification, social media, and agriculture. However, recent research shows that deep learning models are vulnerable to adversarial attacks built from slight, carefully crafted perturbations. The resulting adversarial examples mislead classifiers toward targeted or non-targeted categories, while the perturbations remain nearly imperceptible to human observers. This threat raises severe security concerns for deep learning applications. This study summarizes recent research on generating adversarial examples from different perspectives and reviews their applications in the physical world. We further discuss the difficulties of achieving model robustness and potential countermeasures.
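To make the notion of a "slightly crafted perturbation" concrete, the sketch below implements the fast gradient sign method (FGSM), one of the most widely studied attacks in this literature: it perturbs the input in the direction of the sign of the loss gradient, bounded element-wise by a budget epsilon. The toy linear softmax classifier here is hypothetical, chosen only so the gradient has a simple closed form; real attacks target deep networks via automatic differentiation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(x, y_true, W, b, epsilon=0.1):
    """Return x + epsilon * sign(d loss / d x) for cross-entropy loss
    on a linear softmax classifier with weights W and bias b."""
    p = softmax(W @ x + b)        # class probabilities
    grad_logits = p.copy()
    grad_logits[y_true] -= 1.0    # d(cross-entropy)/d(logits)
    grad_x = W.T @ grad_logits    # chain rule back to the input
    return x + epsilon * np.sign(grad_x)

# Toy setup: a random 3-class linear classifier on an 8-dimensional input.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
b = np.zeros(3)
x = rng.normal(size=8)
y = int(np.argmax(softmax(W @ x + b)))  # use the clean prediction as the label

x_adv = fgsm_perturb(x, y, W, b, epsilon=0.5)
print("clean prediction:", y)
print("adversarial prediction:", int(np.argmax(softmax(W @ x_adv + b))))
print("max per-pixel change:", float(np.abs(x_adv - x).max()))
```

The per-element change is capped at epsilon, which is why, on images, the perturbation can flip the predicted class while staying visually indistinguishable from the original.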