Narayanan C. Krishnan | Aroof Aimen | Sahil Sidheekh | Vineet Madan