Improved Consistency Regularization for GANs
[1] Dustin Tran et al. Hierarchical Implicit Models and Likelihood-Free Variational Inference, 2017, NIPS.
[2] Song Han et al. Differentiable Augmentation for Data-Efficient GAN Training, 2020, NeurIPS.
[3] Yusheng Xie et al. Temporal-Aware Self-Supervised Learning for 3D Hand Pose and Mesh Estimation in Videos, 2020, 2021 IEEE Winter Conference on Applications of Computer Vision (WACV).
[4] Alexandros G. Dimakis et al. Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Augustus Odena et al. Open Questions about Generative Adversarial Networks, 2019, Distill.
[6] Timo Aila et al. A Style-Based Generator Architecture for Generative Adversarial Networks, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[7] Sameer Singh et al. Generating Natural Adversarial Examples, 2017, ICLR.
[8] Tero Karras et al. Training Generative Adversarial Networks with Limited Data, 2020, NeurIPS.
[9] Han Zhang et al. Self-Attention Generative Adversarial Networks, 2018, ICML.
[10] Yusheng Xie et al. MVHM: A Large-Scale Multi-View Hand Mesh Benchmark for Accurate 3D Hand Pose Estimation, 2020, 2021 IEEE Winter Conference on Applications of Computer Vision (WACV).
[11] Colin Raffel et al. Realistic Evaluation of Deep Semi-Supervised Learning Algorithms, 2018, NeurIPS.
[12] David Berthelot et al. MixMatch: A Holistic Approach to Semi-Supervised Learning, 2019, NeurIPS.
[13] Michael S. Bernstein et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[14] Aaron C. Courville et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[15] Léon Bottou et al. Wasserstein GAN, 2017, ArXiv.
[16] Dustin Tran et al. Deep and Hierarchical Implicit Models, 2017, ArXiv.
[17] Honglak Lee et al. Consistency Regularization for Generative Adversarial Networks, 2020, ICLR.
[18] Graham W. Taylor et al. Improved Regularization of Convolutional Neural Networks with Cutout, 2017, ArXiv.
[19] Wojciech Zaremba et al. Improved Techniques for Training GANs, 2016, NIPS.
[20] Xiaohua Zhai et al. A Large-Scale Study on Regularization and Normalization in GANs, 2018, ICML.
[21] Colin Raffel et al. Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples, 2020, NeurIPS.
[22] Jian Sun et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Philipp Krähenbühl et al. Don't let your Discriminator be fooled, 2018, ICLR.
[24] Jaakko Lehtinen et al. Progressive Growing of GANs for Improved Quality, Stability, and Variation, 2017, ICLR.
[25] Soumith Chintala et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[26] Jonathon Shlens et al. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016, ICML.
[27] Yoshua Bengio et al. Small-GAN: Speeding Up GAN Training Using Core-sets, 2019, ICML.
[28] Seunghoon Hong et al. Diversity-Sensitive Conditional Generative Adversarial Networks, 2019, ICLR.
[29] Timo Aila et al. Temporal Ensembling for Semi-Supervised Learning, 2016, ICLR.
[30] Yoshua Bengio et al. Generative Adversarial Nets, 2014, NIPS.
[31] C. Villani. Optimal Transport: Old and New, 2008.
[32] Ian J. Goodfellow et al. Skill Rating for Generative Models, 2018, ArXiv.
[33] Jae Hyun Lim et al. Geometric GAN, 2017, ArXiv.
[34] Philip Bachman et al. Learning with Pseudo-Ensembles, 2014, NIPS.
[35] Shin Ishii et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[36] Sepp Hochreiter et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[37] Sameer Singh et al. Image Augmentations for GAN Training, 2020, ArXiv.
[38] Quoc V. Le et al. Unsupervised Data Augmentation for Consistency Training, 2019, NeurIPS.
[39] Colin Raffel et al. Is Generator Conditioning Causally Related to GAN Performance?, 2018, ICML.
[40] Yuichi Yoshida et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[41] Tolga Tasdizen et al. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning, 2016, NIPS.
[42] Mario Lucic et al. Are GANs Created Equal? A Large-Scale Study, 2017, NeurIPS.
[43] Sebastian Nowozin et al. Stabilizing Training of Generative Adversarial Networks through Regularization, 2017, NIPS.
[44] Jaakko Lehtinen et al. Analyzing and Improving the Image Quality of StyleGAN, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[45] David Pfau et al. Unrolled Generative Adversarial Networks, 2016, ICLR.
[46] Alex Krizhevsky et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[47] Xiang Wei et al. Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect, 2018, ICLR.
[48] Jacob Abernethy et al. On Convergence and Stability of GANs, 2018.
[49] Alexander Kolesnikov et al. S4L: Self-Supervised Semi-Supervised Learning, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[50] Yan Wu et al. LOGAN: Latent Optimisation for Generative Adversarial Networks, 2019, ArXiv.
[51] Jeff Donahue et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis, 2018, ICLR.