In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we argued that adversarial examples in machine learning might be due to an inherent computational hardness of the problem. More precisely, we constructed a binary classification task for which (i) a robust classifier exists, yet (ii) no efficient algorithm in the statistical query model can achieve non-trivial accuracy. In the present paper we significantly strengthen both (i) and (ii): we now construct a task which admits (i') a maximally robust classifier (that is, it can tolerate perturbations of size comparable to the size of the examples themselves); and moreover we prove computational hardness of learning this task under (ii') a standard cryptographic assumption.
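To give a flavor of how cryptographic assumptions can make a prediction task computationally hard, consider labels produced by a pseudo-random generator such as Blum-Blum-Shub [2]. The sketch below is purely illustrative and is not the construction of this paper; the primes and seed are toy values chosen for readability, whereas a cryptographic instance would use large Blum primes. Any learner that predicted such labels noticeably better than chance would, in effect, be distinguishing the generator's output from random bits.

```python
# Illustrative toy (not this paper's construction): labels drawn from the
# Blum-Blum-Shub pseudo-random generator. Under the standard hardness
# assumption for BBS, predicting these bits better than chance is infeasible.

def bbs_bits(seed, n_bits, p=499, q=547):
    """Return n_bits output bits of the Blum-Blum-Shub generator.

    p and q are small Blum primes (both congruent to 3 mod 4), far too
    small for security -- real instances use cryptographically large primes.
    """
    N = p * q
    x = seed % N
    out = []
    for _ in range(n_bits):
        x = (x * x) % N          # square the state modulo N
        out.append(x & 1)        # emit the least-significant bit
    return out

labels = bbs_bits(seed=17, n_bits=8)
```

The point of the analogy is that the label sequence is a deterministic, efficiently computable function of the seed, yet recovering any predictive structure from the outputs alone is as hard as breaking the generator.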
[1] Vijay V. Vazirani et al. Trapdoor pseudo-random number generators, with applications to protocol design. 24th Annual Symposium on Foundations of Computer Science (SFCS 1983), 1983.
[2] Manuel Blum et al. A Simple Unpredictable Pseudo-Random Number Generator. SIAM J. Comput., 1986.
[3] Oded Goldreich. Computational Complexity: A Conceptual Perspective. 2008.
[4] Ilya P. Razenshteyn et al. Adversarial examples from computational constraints. ICML, 2018.