Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective
[1] Lawrence Carin, et al. On Leveraging Pretrained GANs for Generation with Limited Data, 2020, ICML.
[2] Xiaohua Zhai, et al. Self-Supervised GANs via Auxiliary Rotation Loss, 2019, CVPR.
[3] Mohammad Norouzi, et al. Big Self-Supervised Models are Strong Semi-Supervised Learners, 2020, NeurIPS.
[4] Ting Chen, et al. Robust Pre-Training by Adversarial Contrastive Learning, 2020, NeurIPS.
[5] Jaakko Lehtinen, et al. Analyzing and Improving the Image Quality of StyleGAN, 2020, CVPR.
[6] Tianlong Chen, et al. GANs Can Play Lottery Tickets Too, 2021, ICLR.
[7] Quoc V. Le, et al. Adversarial Examples Improve Image Recognition, 2020, CVPR.
[8] Shiyu Chang, et al. The Lottery Tickets Hypothesis for Supervised and Self-supervised Pre-training in Computer Vision Models, 2021, CVPR.
[9] Jinwoo Shin, et al. Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs, 2020, arXiv:2002.10964.
[10] Abhishek Kumar, et al. Few-Shot Adaptation of Generative Adversarial Networks, 2020, ArXiv.
[11] Tianlong Chen, et al. Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free, 2020, NeurIPS.
[12] Soheil Feizi, et al. Winning Lottery Tickets in Deep Generative Models, 2021, AAAI.
[13] Michael Carbin, et al. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, 2018, ICLR.
[14] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[15] Song Han, et al. Differentiable Augmentation for Data-Efficient GAN Training, 2020, NeurIPS.
[16] Yu Cheng, et al. FreeLB: Enhanced Adversarial Training for Natural Language Understanding, 2020, ICLR.
[17] Han Zhang, et al. Self-Attention Generative Adversarial Networks, 2018, ICML.
[18] Preetum Nakkiran, et al. Adversarial Robustness May Be at Odds With Simplicity, 2019, ArXiv.
[19] Timo Aila, et al. A Style-Based Generator Architecture for Generative Adversarial Networks, 2019, CVPR.
[20] Jaesik Park, et al. ContraGAN: Contrastive Learning for Conditional Image Generation, 2020, NeurIPS.
[21] Zhanxing Zhu, et al. Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors, 2019, ArXiv.
[22] Yoshua Bengio, et al. Small-GAN: Speeding Up GAN Training Using Core-sets, 2019, ICML.
[23] Ding Liu, et al. EnlightenGAN: Deep Light Enhancement Without Paired Supervision, 2019, IEEE Transactions on Image Processing.
[24] Jaakko Lehtinen, et al. Progressive Growing of GANs for Improved Quality, Stability, and Variation, 2017, ICLR.
[25] Song Han, et al. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[26] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[27] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[28] Dacheng Tao, et al. On Positive-Unlabeled Classification in GAN, 2020, CVPR.
[29] Shrey Desai, et al. Evaluating Lottery Tickets Under Distributional Shifts, 2019, EMNLP.
[30] Song-Chun Zhu, et al. Learning Hybrid Image Templates (HIT) by Information Projection, 2012, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[31] Xiaohua Zhai, et al. High-Fidelity Image Generation With Fewer Labels, 2019, ICML.
[32] Zhangyang Wang, et al. DeblurGAN-v2: Deblurring (Orders-of-Magnitude) Faster and Better, 2019, ICCV.
[33] Zhangyang Wang, et al. Efficient Lottery Ticket Finding: Less Data is More, 2021, ICML.
[34] Eli Shechtman, et al. Few-shot Image Generation with Elastic Weight Consolidation, 2020, NeurIPS.
[35] Shiyu Chang, et al. The Lottery Ticket Hypothesis for Pre-trained BERT Networks, 2020, NeurIPS.
[36] Erich Elsen, et al. The Difficulty of Training Sparse Neural Networks, 2019, ArXiv.
[37] Erich Elsen, et al. The State of Sparsity in Deep Neural Networks, 2019, ArXiv.
[38] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[39] Léon Bottou, et al. Wasserstein Generative Adversarial Networks, 2017, ICML.
[40] David Bau, et al. Diverse Image Generation via Self-Conditioned GANs, 2020, CVPR.
[41] Amos J. Storkey, et al. Data Augmentation Generative Adversarial Networks, 2018, ICLR.
[42] Zhe Gan, et al. EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets, 2020, ACL.
[43] M. Maire, et al. Winning the Lottery with Continuous Sparsification, 2019, NeurIPS.
[44] Honglak Lee, et al. Consistency Regularization for Generative Adversarial Networks, 2020, ICLR.
[45] Zhangyang Wang, et al. A Unified Lottery Ticket Hypothesis for Graph Neural Networks, 2021, ICML.
[46] Dan Zhang, et al. PA-GAN: Improving GAN Training by Progressive Augmentation, 2019, ArXiv.
[47] Zhiqiang Shen, et al. Learning Efficient Convolutional Networks through Network Slimming, 2017, ICCV.
[48] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[49] Fahad Shahbaz Khan, et al. MineGAN: Effective Knowledge Transfer From GANs to Target Domains With Few Images, 2020, CVPR.
[50] Hung-Yu Tseng, et al. Regularizing Generative Adversarial Networks under Limited Data, 2021, CVPR.
[51] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[52] Jiayu Wu, et al. Tiny ImageNet Challenge, 2017.
[53] Rob Fergus, et al. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, 2015, NIPS.
[54] Michael Carbin, et al. Comparing Rewinding and Fine-tuning in Neural Network Pruning, 2019, ICLR.
[55] Tomas Pfister, et al. Learning from Simulated and Unsupervised Images through Adversarial Training, 2017, CVPR.
[56] Joan Bruna, et al. Intriguing properties of neural networks, 2013, ICLR.
[57] Roger B. Grosse, et al. Picking Winning Tickets Before Training by Preserving Gradient Flow, 2020, ICLR.
[58] Shiyu Chang, et al. AutoGAN: Neural Architecture Search for Generative Adversarial Networks, 2019, ICCV.
[59] Gintare Karolina Dziugaite, et al. Linear Mode Connectivity and the Lottery Ticket Hypothesis, 2019, ICML.
[60] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[61] Bogdan Raducanu, et al. Transferring GANs: generating images from limited data, 2018, ECCV.
[62] Yiming Yang, et al. MMD GAN: Towards Deeper Understanding of Moment Matching Network, 2017, NIPS.
[63] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[64] Mauricio J. Serrano, et al. The Sooner The Better: Investigating Structure of Early Winning Lottery Tickets, 2019.
[65] Michael I. Jordan, et al. Theoretically Principled Trade-off between Robustness and Accuracy, 2019, ICML.
[66] Yu Cheng, et al. Large-Scale Adversarial Training for Vision-and-Language Representation Learning, 2020, NeurIPS.
[67] Chao Xu, et al. Distilling Portable Generative Adversarial Networks for Image Translation, 2020, AAAI.
[68] Takeru Miyato, et al. cGANs with Projection Discriminator, 2018, ICLR.
[69] Yue Wang, et al. Drawing Early-Bird Tickets: Towards More Efficient Training of Deep Networks, 2019, ICLR.
[70] Zhangyang Wang, et al. Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning, 2021, ICLR.
[71] Tero Karras, et al. Training Generative Adversarial Networks with Limited Data, 2020, NeurIPS.
[72] Léon Bottou, et al. Towards Principled Methods for Training Generative Adversarial Networks, 2017, ICLR.
[73] Aleksander Madry, et al. Adversarially Robust Generalization Requires More Data, 2018, NeurIPS.
[74] Dilin Wang, et al. Improving Neural Language Modeling via Adversarial Training, 2019, ICML.
[75] Yuandong Tian, et al. Playing the Lottery with Rewards and Multiple Languages: Lottery Tickets in RL and NLP, 2019, ICLR.
[76] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[77] Mingjie Sun, et al. Rethinking the Value of Network Pruning, 2018, ICLR.
[78] Terrance DeVries, et al. Instance Selection for GANs, 2020, NeurIPS.
[79] Tianlong Chen, et al. Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference, 2020, ICLR.
[80] Zhenan Sun, et al. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications, 2020, IEEE Transactions on Knowledge and Data Engineering.
[81] Tatsuya Harada, et al. Image Generation From Small Datasets via Batch Statistics Adaptation, 2019, ICCV.
[82] Zhe Gan, et al. Adversarial Feature Augmentation and Normalization for Visual Recognition, 2021, Transactions on Machine Learning Research.
[83] Jeff Donahue, et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis, 2018, ICLR.
[84] Aleksander Madry, et al. Robustness May Be at Odds with Accuracy, 2018, ICLR.
[85] Aditi Raghunathan, et al. Adversarial Training Can Hurt Generalization, 2019, ArXiv.
[86] Bernt Schiele, et al. Disentangling Adversarial Robustness and Generalization, 2019, CVPR.
[87] Yu Cheng, et al. Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning, 2020, CVPR.
[88] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[89] Colin Wei, et al. Improved Sample Complexities for Deep Networks and Robust Classification via an All-Layer Margin, 2019, ICLR.
[90] David J. Schwab, et al. The Early Phase of Neural Network Training, 2020, ICLR.
[91] Sebastian Nowozin, et al. Which Training Methods for GANs do actually Converge?, 2018, ICML.
[92] Joan Bruna, et al. Few-Shot Learning with Graph Neural Networks, 2017, ICLR.
[93] Ngai-Man Cheung, et al. Towards Good Practices for Data Augmentation in GAN Training, 2020, ArXiv.
[94] Honglak Lee, et al. Improved Consistency Regularization for GANs, 2021, AAAI.
[95] Yizhe Zhu, et al. Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis, 2021, ICLR.
[96] Dimitris N. Metaxas, et al. StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks, 2017, ICCV.
[97] Samy Bengio, et al. Adversarial Machine Learning at Scale, 2016, ICLR.