Gödel's Sentence Is An Adversarial Example But Unsolvable

In recent years, adversarial examples of many different kinds have emerged across fields, including purely natural ones that involve no perturbation at all. A variety of defenses have been proposed and then quickly broken. Two fundamental questions therefore need to be asked: why do adversarial examples exist, and are they unsolvable? In this paper, we show that adversarial examples exist because there are non-isomorphic natural explanations that all explain the same data set. Specifically, for the two natural explanations of being true and being provable, Gödel's sentence is an adversarial example, and it is ineliminable: it cannot be removed by collecting more data or by improving the learning algorithm. Finally, from the perspective of computability, we prove that adversarial examples are incomputable, i.e., no algorithm can recognize them all.
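
A minimal formal sketch of this claim, in our own notation rather than the paper's: let the "data set" be the sentences a learner has verified (the provable ones), and let the two competing natural explanations be truth in the standard model and provability in Peano Arithmetic.

\begin{align*}
  h_1(\varphi) &:= [\,\mathbb{N} \models \varphi\,]\ \text{(true)},
  \qquad h_2(\varphi) := [\,\mathrm{PA} \vdash \varphi\,]\ \text{(provable)} \\
  \mathrm{PA} \vdash \varphi &\;\Longrightarrow\; h_1(\varphi) = h_2(\varphi) = 1
  \quad \text{(agreement on the data, by soundness)} \\
  \mathbb{N} \models G,\ \mathrm{PA} \nvdash G &\;\Longrightarrow\; h_1(G) = 1 \neq 0 = h_2(G)
  \quad \text{(separation, by Gödel's first theorem, assuming PA is consistent)}
\end{align*}

Whichever of $h_1$ and $h_2$ a learner has internalized from the data, the Gödel sentence $G$ is an input on which the two explanations diverge, and no further provable data can tell them apart. Moreover, $h_1$ is not computable (Tarski's undefinability theorem) and $h_2$ is only semi-decidable, so the set of inputs on which they diverge, the true-but-unprovable sentences, is not decidable by any algorithm.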
