Zhengli Zhao | Sameer Singh | Honglak Lee | Zizhao Zhang | Augustus Odena | Han Zhang
[1] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[2] Philipp Krähenbühl, et al. Don't let your Discriminator be fooled, 2018, ICLR.
[3] Augustus Odena, et al. Open Questions about Generative Adversarial Networks, 2019, Distill.
[4] David Pfau, et al. Unrolled Generative Adversarial Networks, 2016, ICLR.
[5] Dustin Tran, et al. Hierarchical Implicit Models and Likelihood-Free Variational Inference, 2017, NIPS.
[6] Michael S. Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge, 2014, International Journal of Computer Vision.
[7] Jeff Donahue, et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis, 2018, ICLR.
[8] Xiang Wei, et al. Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect, 2018, ICLR.
[9] Jacob Abernethy, et al. On Convergence and Stability of GANs, 2018.
[10] Alexandros G. Dimakis, et al. Your Local GAN: Designing Two Dimensional Local Attention Mechanisms for Generative Models, 2020, CVPR.
[11] Jaakko Lehtinen, et al. Training Generative Adversarial Networks with Limited Data, 2020, NeurIPS.
[12] Timo Aila, et al. Temporal Ensembling for Semi-Supervised Learning, 2016, ICLR.
[13] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[14] Léon Bottou, et al. Wasserstein GAN, 2017, arXiv.
[15] Dustin Tran, et al. Deep and Hierarchical Implicit Models, 2017, arXiv.
[16] Colin Raffel, et al. Is Generator Conditioning Causally Related to GAN Performance?, 2018, ICML.
[17] Jaakko Lehtinen, et al. Progressive Growing of GANs for Improved Quality, Stability, and Variation, 2017, ICLR.
[18] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[19] Colin Raffel, et al. Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples, 2020, NeurIPS.
[20] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[21] Graham W. Taylor, et al. Improved Regularization of Convolutional Neural Networks with Cutout, 2017, arXiv.
[22] Yan Wu, et al. LOGAN: Latent Optimisation for Generative Adversarial Networks, 2019, arXiv.
[23] Quoc V. Le, et al. Unsupervised Data Augmentation for Consistency Training, 2019, NeurIPS.
[24] Xiaohua Zhai, et al. A Large-Scale Study on Regularization and Normalization in GANs, 2018, ICML.
[25] David Berthelot, et al. MixMatch: A Holistic Approach to Semi-Supervised Learning, 2019, NeurIPS.
[26] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[27] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[28] Han Zhang, et al. Self-Attention Generative Adversarial Networks, 2018, ICML.
[29] Philip Bachman, et al. Learning with Pseudo-Ensembles, 2014, NIPS.
[30] Jae Hyun Lim, et al. Geometric GAN, 2017, arXiv.
[31] Yusheng Xie, et al. Temporal-Aware Self-Supervised Learning for 3D Hand Pose and Mesh Estimation in Videos, 2021, WACV.
[32] Tolga Tasdizen, et al. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning, 2016, NIPS.
[33] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[34] Timo Aila, et al. A Style-Based Generator Architecture for Generative Adversarial Networks, 2019, CVPR.
[35] Sameer Singh, et al. Generating Natural Adversarial Examples, 2017, ICLR.
[36] Mario Lucic, et al. Are GANs Created Equal? A Large-Scale Study, 2017, NeurIPS.
[37] Ian J. Goodfellow, et al. Skill Rating for Generative Models, 2018, arXiv.
[38] Song Han, et al. Differentiable Augmentation for Data-Efficient GAN Training, 2020, NeurIPS.
[39] Colin Raffel, et al. Realistic Evaluation of Deep Semi-Supervised Learning Algorithms, 2018, NeurIPS.
[40] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[41] Jonathon Shlens, et al. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016, ICML.
[42] Yusheng Xie, et al. MVHM: A Large-Scale Multi-View Hand Mesh Benchmark for Accurate 3D Hand Pose Estimation, 2021, WACV.
[43] Seunghoon Hong, et al. Diversity-Sensitive Conditional Generative Adversarial Networks, 2019, ICLR.
[44] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[45] Sebastian Nowozin, et al. Stabilizing Training of Generative Adversarial Networks through Regularization, 2017, NIPS.
[46] Honglak Lee, et al. Consistency Regularization for Generative Adversarial Networks, 2020, ICLR.
[47] Jaakko Lehtinen, et al. Analyzing and Improving the Image Quality of StyleGAN, 2020, CVPR.
[48] Yoshua Bengio, et al. Small-GAN: Speeding Up GAN Training Using Core-sets, 2019, ICML.
[49] Sameer Singh, et al. Image Augmentations for GAN Training, 2020, arXiv.
[50] Alexander Kolesnikov, et al. S4L: Self-Supervised Semi-Supervised Learning, 2019, ICCV.