[1] John E. Hopcroft, et al. Stacked Generative Adversarial Networks, 2017, CVPR.
[2] Lucas Theis, et al. Amortised MAP Inference for Image Super-resolution, 2016, ICLR.
[3] Han Zhang, et al. Self-Attention Generative Adversarial Networks, 2018, ICML.
[4] Léon Bottou, et al. Wasserstein Generative Adversarial Networks, 2017, ICML.
[5] Honglak Lee, et al. An Analysis of Single-Layer Networks in Unsupervised Feature Learning, 2011, AISTATS.
[6] Jaakko Lehtinen, et al. Progressive Growing of GANs for Improved Quality, Stability, and Variation, 2017, ICLR.
[7] Arthur Gretton, et al. On Gradient Regularizers for MMD GANs, 2018, NeurIPS.
[8] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2015, ICCV.
[9] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[10] Bernhard Schölkopf, et al. A Kernel Two-Sample Test, 2012, J. Mach. Learn. Res.
[11] Yong Yu, et al. Activation Maximization Generative Adversarial Nets, 2017.
[12] Jonathon Shlens, et al. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016, ICML.
[13] Stefano Ermon, et al. Generative Adversarial Imitation Learning, 2016, NIPS.
[14] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[15] Jun-Yan Zhu, et al. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, 2017, ICCV.
[16] Alan Ritter, et al. Adversarial Learning for Neural Dialogue Generation, 2017, EMNLP.
[17] Dustin Tran, et al. Hierarchical Implicit Models and Likelihood-Free Variational Inference, 2017, NIPS.
[18] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[19] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2016, CVPR.
[20] Zoubin Ghahramani, et al. Training Generative Neural Networks via Maximum Mean Discrepancy Optimization, 2015, UAI.
[21] Arthur Gretton, et al. Demystifying MMD GANs, 2018, ICLR.
[22] Pablo M. Granitto, et al. Class-Splitting Generative Adversarial Networks, 2017, arXiv.
[23] Sepp Hochreiter, et al. Coulomb GANs: Provably Optimal Nash Equilibria via Potential Fields, 2017, ICLR.
[24] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[25] J. Zico Kolter, et al. Gradient Descent GAN Optimization Is Locally Stable, 2017, NIPS.
[26] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[27] Yinda Zhang, et al. LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop, 2015, arXiv.
[28] Takeru Miyato, et al. cGANs with Projection Discriminator, 2018, ICLR.
[29] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[30] Kamalika Chaudhuri, et al. Approximation and Convergence Properties of Generative Adversarial Learning, 2017, NIPS.
[31] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[32] Francesco Visin, et al. A Guide to Convolution Arithmetic for Deep Learning, 2016, arXiv.
[33] Laurens van der Maaten, et al. Accelerating t-SNE using Tree-Based Algorithms, 2014, J. Mach. Learn. Res.
[34] Richard S. Zemel, et al. Generative Moment Matching Networks, 2015, ICML.
[35] Kevin Scaman, et al. Lipschitz Regularity of Deep Neural Networks: Analysis and Efficient Estimation, 2018, NeurIPS.
[36] Masashi Sugiyama, et al. Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks, 2018, NeurIPS.
[37] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[38] Zhou Wang, et al. Multiscale Structural Similarity for Image Quality Assessment, 2003, Thirty-Seventh Asilomar Conference on Signals, Systems & Computers.
[39] Ming-Hsuan Yang, et al. Semi-Supervised Learning for Optical Flow with Generative Adversarial Networks, 2017, NIPS.
[40] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[41] Martin J. Wainwright, et al. On Surrogate Loss Functions and f-Divergences, 2005, arXiv math/0510521.
[42] Yiming Yang, et al. MMD GAN: Towards Deeper Understanding of Moment Matching Network, 2017, NIPS.
[43] Marc G. Genton, et al. Classes of Kernels for Machine Learning: A Statistics Perspective, 2002, J. Mach. Learn. Res.
[44] Philip M. Long, et al. The Singular Values of Convolutional Layers, 2018, ICLR.
[45] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.