Machine learning models are powerful but fallible. Generating adversarial examples - inputs deliberately crafted to cause a model to misclassify or otherwise err - can yield important insight into a model's assumptions and vulnerabilities. Despite significant recent work on adversarial example generation targeting image classifiers, relatively little work has explored adversarial examples for text classifiers; moreover, many existing generation algorithms require full access to the target model's parameters, rendering them impractical for many real-world attacks. In this work, we introduce DANCin SEQ2SEQ, a GAN-inspired algorithm for generating adversarial text examples against largely black-box text classifiers. We recast adversarial text example generation as a reinforcement learning problem and demonstrate that our algorithm takes preliminary but promising steps toward generating semantically meaningful adversarial text examples in a real-world attack scenario.
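To make the reinforcement learning recasting concrete, the sketch below shows one plausible REINFORCE-style formulation under black-box access: the generator samples a token sequence, the target classifier is queried only for its output probability on the attacker's desired class, and that scalar is used as the reward weighting the sampled sequence's log-likelihood. This is a minimal illustration, not the paper's implementation; the ToyGenerator architecture, the black_box_reward stand-in, and all hyperparameters are assumptions introduced here.

```python
# Minimal REINFORCE sketch of black-box adversarial text generation.
# All names and sizes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM, HID_DIM, MAX_LEN = 100, 32, 64, 10

class ToyGenerator(nn.Module):
    """Tiny autoregressive generator standing in for a seq2seq model."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.rnn = nn.GRU(EMB_DIM, HID_DIM, batch_first=True)
        self.out = nn.Linear(HID_DIM, VOCAB_SIZE)

    def forward(self, tok, hidden):
        x = self.emb(tok)                      # (1, 1, EMB_DIM)
        y, hidden = self.rnn(x, hidden)
        return self.out(y.squeeze(1)), hidden  # logits over the vocabulary

def black_box_reward(tokens):
    """Stand-in for querying the target classifier. In a real attack this
    would return the classifier's probability for the attacker's target
    class; here a toy deterministic score keeps the sketch self-contained."""
    return sum(t % 2 == 0 for t in tokens) / len(tokens)

gen = ToyGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(100):
    tok = torch.zeros(1, 1, dtype=torch.long)  # <bos> token, id 0 assumed
    hidden, log_probs, tokens = None, [], []
    for _ in range(MAX_LEN):
        logits, hidden = gen(tok, hidden)
        dist = torch.distributions.Categorical(logits=logits)
        sample = dist.sample()                  # sample the next token
        log_probs.append(dist.log_prob(sample))
        tokens.append(sample.item())
        tok = sample.unsqueeze(0)
    # Black-box access: only the classifier's output score is observed.
    reward = black_box_reward(tokens)
    # REINFORCE: scale the sequence log-likelihood by the reward.
    loss = -reward * torch.stack(log_probs).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because gradients flow only through the generator's own log-probabilities, the target classifier never needs to expose its parameters, which is what makes this formulation compatible with the largely black-box attack setting described above.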