A Comparison of Evolutionary and Neural Attention Modeling Relative to Adversarial Learning

State-of-the-art machine learning for computer vision is based on data-driven feature learning. While these learning paradigms often yield impressive results that outperform hand-crafted solutions, they unfortunately suffer from a lack of explainability. In response, both the neural network and evolutionary computation communities have provided techniques tailored to visually explain how machines process new observations. Examples range from gradient-weighted class activation mapping and guided backpropagation to convolutional matrix transpose and improved evolutionary constructed computing. Previously, we put forth a framework called the Adversarial Modifier Set (AMS), in which adversarial imagery is generated from regions identified by evolutionary features. In this article, we seek to improve both AMS and adversarial systems in general by performing a comparative analysis between neural attention modeling techniques and our previously used evolutionary strategy. Preliminary results on a computer vision dataset show that while the neural techniques are faster, the evolutionary algorithms yield more diverse, higher-fidelity attention maps that give rise to improved features for adversarial learning.