Generative Models from the perspective of Continual Learning
Timothée Lesort | Hugo Caselles-Dupré | Michaël Garcia Ortiz | Andrei Stoian | David Filliat
[1] Sepp Hochreiter,et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium , 2017, NIPS.
[2] Bogdan Raducanu,et al. Memory Replay GANs: learning to generate images from new categories without forgetting , 2018, NeurIPS.
[3] Baoxin Li,et al. A Strategy for an Uncompromising Incremental Learner , 2017, ArXiv.
[4] Rishi Sharma,et al. A Note on the Inception Score , 2018, ArXiv.
[5] Aaron C. Courville,et al. Improved Training of Wasserstein GANs , 2017, NIPS.
[6] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.
[7] Wojciech Zaremba,et al. Improved Techniques for Training GANs , 2016, NIPS.
[8] Christoph H. Lampert,et al. iCaRL: Incremental Classifier and Representation Learning , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[9] Léon Bottou,et al. Wasserstein GAN , 2017, ArXiv.
[10] Ali Borji,et al. Pros and Cons of GAN Evaluation Measures , 2018, Comput. Vis. Image Underst.
[11] Richard E. Turner,et al. Variational Continual Learning , 2017, ICLR.
[12] Xu He,et al. Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation , 2018, ICLR.
[13] Faisal Shafait,et al. Distillation Techniques for Pseudo-rehearsal Based Incremental Learning , 2018, ArXiv.
[14] R. French. Catastrophic forgetting in connectionist networks , 1999, Trends in Cognitive Sciences.
[15] Razvan Pascanu,et al. Overcoming catastrophic forgetting in neural networks , 2016, Proceedings of the National Academy of Sciences.
[16] Geoffrey E. Hinton,et al. Distilling the Knowledge in a Neural Network , 2015, ArXiv.
[17] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[18] Razvan Pascanu,et al. Progressive Neural Networks , 2016, ArXiv.
[19] Derek Hoiem,et al. Learning without Forgetting , 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[20] Lawrence Carin,et al. ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching , 2017, NIPS.
[21] Yandong Guo,et al. Incremental Classifier Learning with Generative Adversarial Networks , 2018, ArXiv.
[22] Roland Vollgraf,et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms , 2017, ArXiv.
[23] Alexandros Kalousis,et al. Lifelong Generative Modeling , 2017, Neurocomputing.
[24] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[25] Jürgen Schmidhuber,et al. Compete to Compute , 2013, NIPS.
[26] Yee Whye Teh,et al. Progress & Compress: A scalable framework for continual learning , 2018, ICML.
[27] Chrisantha Fernando,et al. PathNet: Evolution Channels Gradient Descent in Super Neural Networks , 2017, ArXiv.
[28] Jiwon Kim,et al. Continual Learning with Deep Generative Replay , 2017, NIPS.
[29] Tom Eccles,et al. Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies , 2018, NeurIPS.
[30] Jimmy Ba,et al. Adam: A Method for Stochastic Optimization , 2014, ICLR.
[31] Han Liu,et al. Continual Learning in Generative Adversarial Nets , 2017, ArXiv.
[32] J. Fagot,et al. Evidence for large long-term memory capacities in baboons and pigeons and its implications for learning and the evolution of cognition , 2006, Proceedings of the National Academy of Sciences.
[33] David Filliat,et al. Training Discriminative Models to Evaluate Generative Ones , 2019, ICANN.