Score-Guided Generative Adversarial Networks

We propose a generative adversarial network (GAN) that introduces an evaluator module built from pretrained networks. The proposed model, called a score-guided GAN (ScoreGAN), uses an evaluation metric for GANs, the Inception score, as a rough guide for training the generator. Because the score is computed with a pretrained network other than the Inception network, ScoreGAN avoids overfitting to the Inception network, so the generated samples do not become adversarial examples of that network. In addition, the evaluation metric is employed only in an auxiliary role, which further prevents overfitting. Evaluated on the CIFAR-10 dataset, ScoreGAN achieved an Inception score of 10.36 ± 0.15, which corresponds to state-of-the-art performance. To show that its effectiveness generalizes, the model was further evaluated on the CIFAR-100 dataset, where ScoreGAN outperformed existing methods with a Fréchet Inception distance (FID) of 13.98.
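The abstract does not give the exact training objective, so the following is only a minimal illustrative sketch of how an Inception-score-style term computed by a pretrained classifier (other than the Inception network, e.g., a MobileNet) could be attached to a generator loss in an auxiliary role. The function names, the weighting factor `lam`, and the PyTorch setup are assumptions for illustration, not the paper's formulation.

```python
# Illustrative sketch only: score-guided auxiliary term for a GAN generator.
# The classifier, loss names, and weighting are hypothetical assumptions.
import torch
import torch.nn.functional as F

def score_guidance(classifier: torch.nn.Module, fake_images: torch.Tensor) -> torch.Tensor:
    """Mean KL(p(y|x) || p(y)) over a batch of generated images.

    exp() of this quantity is the Inception-score form; a larger value means
    the classifier sees confident per-sample predictions and diverse classes
    overall across the batch.
    """
    logits = classifier(fake_images)              # (N, num_classes)
    p_yx = F.softmax(logits, dim=1)               # conditional distribution p(y|x)
    p_y = p_yx.mean(dim=0, keepdim=True)          # marginal distribution p(y)
    kl = (p_yx * (torch.log(p_yx + 1e-8) - torch.log(p_y + 1e-8))).sum(dim=1)
    return kl.mean()

def generator_loss(adv_loss: torch.Tensor,
                   classifier: torch.nn.Module,
                   fake_images: torch.Tensor,
                   lam: float = 0.1) -> torch.Tensor:
    # The score term only guides training: it is subtracted (we maximize the
    # score) with a small weight so the adversarial objective still dominates.
    return adv_loss - lam * score_guidance(classifier, fake_images)
```

In this sketch the classifier's weights would be frozen; only the generator receives gradients from the score term, which mirrors the auxiliary, guidance-only role the abstract describes.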
