Deep neural networks (DNNs) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small, and often imperceptible to human vision, perturbation so that the resulting deformed input is misclassified by the network. These findings emphasize the differences between the ways DNNs and humans classify patterns and raise the question of designing learning algorithms that mimic human perception more accurately than existing methods do. Our article examines these questions within the framework of dense associative memory (DAM) models. These models are defined by an energy function with higher-order (higher than quadratic) interactions between the neurons. We show that in the limit when the power of the interaction vertex in the energy function is sufficiently large, these models have the following three properties. First, the minima of the objective function are free from rubbish images, so that each minimum is a semantically meaningful pattern. Second, artificial patterns poised precisely at the decision boundary look ambiguous to human subjects and share aspects of both classes separated by that boundary. Third, adversarial images constructed by models with a small power of the interaction vertex, which are equivalent to DNNs with rectified linear units (ReLUs), fail to transfer to and fool the models with higher-order interactions. This opens up the possibility of using higher-order models for detecting and stopping malicious adversarial attacks. The results we present suggest that DAMs with higher-order energy functions are more robust to adversarial and rubbish inputs than DNNs with ReLUs.
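To make the energy-based picture concrete, the sketch below computes a dense-associative-memory energy of the kind described above: each stored pattern contributes a term obtained by passing its overlap with the current state through an interaction function raised to the power n. This is a minimal illustration, not the paper's reference implementation; the rectified-polynomial choice F_n(x) = max(x, 0)^n, the array shapes, and all names (rectified_poly, dam_energy, memories) are assumptions made here for clarity.

    import numpy as np

    def rectified_poly(x, n):
        # Interaction function F_n(x) = max(x, 0)**n.
        # Larger n corresponds to a higher-order interaction vertex.
        return np.maximum(x, 0.0) ** n

    def dam_energy(state, memories, n):
        # Dense associative memory energy: E(state) = -sum_mu F_n(xi_mu . state)
        #   state    : (N,) array, current configuration of N neurons
        #   memories : (K, N) array, K stored patterns xi_mu
        #   n        : power of the interaction vertex (n = 2 gives a
        #              classical quadratic, Hopfield-like energy)
        overlaps = memories @ state            # overlap of the state with every memory
        return -rectified_poly(overlaps, n).sum()

    # Illustrative usage with random +/-1 patterns (hypothetical data).
    rng = np.random.default_rng(0)
    memories = rng.choice([-1.0, 1.0], size=(5, 100))  # 5 stored patterns of 100 neurons
    state = memories[0].copy()                         # start exactly at a stored pattern
    print(dam_energy(state, memories, n=2))            # low-order (quadratic-like) vertex
    print(dam_energy(state, memories, n=20))           # high-order vertex

Increasing n makes each stored pattern contribute a much sharper, more local term to the energy; the large-n regime of this kind of energy function is the limit in which the abstract's three properties are claimed to hold.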