Spatial evolutionary generative adversarial networks

Generative adversarial networks (GANs) suffer from training pathologies such as instability and mode collapse, which largely arise from a lack of diversity in their adversarial interactions. Evolutionary generative adversarial networks apply the principles of evolutionary computation to mitigate these problems. We hybridize two such approaches that promote training diversity. One, E-GAN, injects mutation diversity: at each batch, it trains copies of the generator with three independent objective functions and then selects the best-performing generator for the next batch. The other, Lipizzaner, injects population diversity: it trains a two-dimensional grid of GANs with a distributed evolutionary algorithm that includes neighbor exchanges of additional training adversaries, performance-based selection, and population-based hyperparameter tuning. We propose to combine the mutation and population approaches to improving diversity. We contribute a superior evolutionary GAN training method, Mustangs, that eliminates the single loss function used across Lipizzaner's grid; instead, in each training round, a loss function is selected with equal probability from among the three that E-GAN uses. Experimental analyses on standard benchmarks, MNIST and CelebA, demonstrate that Mustangs is a statistically significantly faster training method that yields more accurate networks.
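The core Mustangs mechanism the abstract describes, per-round random selection among E-GAN's three generator objectives, is simple to express in code. Below is a minimal, illustrative PyTorch sketch, assuming a discriminator whose output is a probability in (0, 1); the three objectives (minimax, heuristic/non-saturating, and least-squares) follow the E-GAN paper, and every function and variable name here is our own illustrative choice, not the authors' implementation.

```python
import random

import torch

# The three E-GAN generator objectives, written for a discriminator
# whose output d_fake = D(G(z)) is a probability in (0, 1).

def minimax_loss(d_fake):
    # Original minimax objective: minimize log(1 - D(G(z))).
    return torch.log(1.0 - d_fake + 1e-8).mean()

def heuristic_loss(d_fake):
    # Non-saturating heuristic objective: maximize log(D(G(z))).
    return -torch.log(d_fake + 1e-8).mean()

def least_squares_loss(d_fake):
    # Least-squares objective: push D(G(z)) toward 1.
    return ((d_fake - 1.0) ** 2).mean()

GENERATOR_LOSSES = [minimax_loss, heuristic_loss, least_squares_loss]

def mustangs_generator_step(generator, discriminator, opt_g, z):
    """One generator update: pick an objective uniformly at random for
    this round, replacing the single fixed loss Lipizzaner uses
    across its whole grid."""
    loss_fn = random.choice(GENERATOR_LOSSES)
    opt_g.zero_grad()
    d_fake = discriminator(generator(z))
    loss = loss_fn(d_fake)
    loss.backward()
    opt_g.step()
    return loss.item()
```

In the full method, each cell of Lipizzaner's spatial grid would run a step like this with its own generator-discriminator pair and the adversaries exchanged from its neighborhood.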
