Yee Whye Teh | Razvan Pascanu | Xu He | Alexandre Galashov | Andrei A. Rusu | Jakub Sygnowski
[1] Sepp Hochreiter, et al. Learning to Learn Using Gradient Descent, 2001, ICANN.
[2] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.
[3] Elad Hoffer, et al. Bayesian Gradient Descent: Online Variational Bayes Learning with Increased Robustness to Catastrophic Forgetting and Weight Pruning, 2018, ArXiv.
[4] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[5] Marcin Andrychowicz, et al. Learning to learn by gradient descent by gradient descent, 2016, NIPS.
[6] Joshua B. Tenenbaum, et al. Human-level concept learning through probabilistic program induction, 2015, Science.
[7] Yee Whye Teh, et al. Conditional Neural Processes, 2018, ICML.
[8] Yoshua Bengio, et al. Mode Regularized Generative Adversarial Networks, 2016, ICLR.
[9] David Pfau, et al. Unrolled Generative Adversarial Networks, 2016, ICLR.
[10] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[11] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Eric Eaton, et al. ELLA: An Efficient Lifelong Learning Algorithm, 2013, ICML.
[13] Richard E. Turner, et al. Variational Continual Learning, 2017, ICLR.
[14] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[15] Jiwon Kim, et al. Continual Learning with Deep Generative Replay, 2017, NIPS.
[16] Truyen Tran, et al. Catastrophic forgetting and mode collapse in GANs, 2020, International Joint Conference on Neural Networks (IJCNN).
[17] Marcus Rohrbach, et al. Memory Aware Synapses: Learning what (not) to forget, 2017, ECCV.
[18] Stefan Wermter, et al. Continual Lifelong Learning with Neural Networks: A Review, 2019, Neural Networks.
[19] Anthony V. Robins, et al. Catastrophic Forgetting, Rehearsal and Pseudorehearsal, 1995, Connect. Sci.
[20] Marc'Aurelio Ranzato, et al. Gradient Episodic Memory for Continual Learning, 2017, NIPS.
[21] Yuan Qi, et al. Virtual Vector Machine for Bayesian Online Classification, 2009, UAI.
[22] Sergey Levine, et al. Online Meta-Learning, 2019, ICML.
[23] Joshua Achiam, et al. On First-Order Meta-Learning Algorithms, 2018, ArXiv.
[24] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[25] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[26] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[27] Yoshua Bengio, et al. An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks, 2013, ICLR.
[28] Svetlana Lazebnik, et al. PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[29] Yee Whye Teh, et al. Progress & Compress: A scalable framework for continual learning, 2018, ICML.
[30] Surya Ganguli, et al. Continual Learning Through Synaptic Intelligence, 2017, ICML.
[31] Botond Cseke, et al. Continual Learning with Bayesian Neural Networks for Non-Stationary Data, 2020, ICLR.
[32] Yoshua Bengio, et al. Online continual learning with no task boundaries, 2019, ArXiv.
[33] Leslie Pack Kaelbling, et al. Acting Optimally in Partially Observable Stochastic Domains, 1994, AAAI.
[34] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[35] Nicolas Y. Masse, et al. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization, 2018, Proceedings of the National Academy of Sciences.
[36] Gerald Tesauro, et al. Temporal Difference Learning and TD-Gammon, 1995, J. Int. Comput. Games Assoc.
[37] Wojciech M. Czarnecki, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning, 2019, Nature.
[38] Razvan Pascanu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[39] Sergey Levine, et al. Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL, 2018, ICLR.
[40] Joel Veness, et al. The Forget-me-not Process, 2016, NIPS.
[41] Michael McCloskey, et al. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem, 1989.
[42] Andreas S. Tolias, et al. Three scenarios for continual learning, 2019, ArXiv.
[43] Katja Hofmann, et al. Fast Context Adaptation via Meta-Learning, 2018, ICML.
[44] Razvan Pascanu, et al. Meta-Learning with Latent Embedding Optimization, 2018, ICLR.
[45] David Barber, et al. Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting, 2018, NeurIPS.
[46] Gregory R. Koch, et al. Siamese Neural Networks for One-Shot Image Recognition, 2015.
[47] Matthew E. Taylor, et al. A survey and critique of multiagent deep reinforcement learning, 2019, Autonomous Agents and Multi-Agent Systems.
[48] Daan Wierstra, et al. Meta-Learning with Memory-Augmented Neural Networks, 2016, ICML.
[49] Guoyin Wang, et al. Generative Adversarial Network Training is a Continual Learning Problem, 2018, ArXiv.
[50] Manfred Opper, et al. A Bayesian approach to on-line learning, 1999.
[51] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[52] G. Monahan. State of the Art—A Survey of Partially Observable Markov Decision Processes: Theory, Models, and Algorithms, 1982.
[53] Alexandros Karatzoglou, et al. Overcoming Catastrophic Forgetting with Hard Attention to the Task, 2018.
[54] Derek Hoiem, et al. Learning without Forgetting, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[55] Kyunghyun Cho, et al. Continual Learning via Neural Pruning, 2019, ArXiv.
[56] Xu He, et al. Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation, 2018, ICLR.
[57] Hal Daumé, et al. Learning Task Grouping and Overlap in Multi-task Learning, 2012, ICML.
[58] Yair Weiss, et al. On GANs and GMMs, 2018, NeurIPS.