Not All Pixels are Born Equal: An Analysis of Evasion Attacks under Locality Constraints

Deep neural networks (DNNs) have enabled success in learning tasks such as image classification, semantic image segmentation, and steering angle prediction, which can be key components of the computer vision pipelines of safety-critical systems such as autonomous vehicles. However, previous work has demonstrated the feasibility of attacking image classification systems with physical adversarial examples.

In this work, we argue that the success of realistic adversarial examples depends heavily on both the structure of the training data and the learning objective. In particular, realistic physical-world attacks on semantic segmentation and steering angle prediction constrain the adversary to localized perturbations, since it is very difficult to perturb the entire field of view of input sensors such as the cameras used in autonomous vehicles. We empirically study the effectiveness of adversarial examples generated under the strict locality constraints imposed by these applications. Even for image classification, we observe that the adversary's success under locality constraints depends on the training dataset. For steering angle prediction, we observe that adversarial perturbations localized to an off-road patch are significantly less successful than those placed on-road. For semantic segmentation, we observe that perturbations localized to small patches are effective at changing labels only in and around those patches, making non-local attacks difficult for an adversary. We further provide a comparative evaluation of these localized attacks across multiple datasets and deep learning models for each task.
