Embracing Change: Continual Learning in Deep Neural Networks
[1] Albert Gordo, et al. Using Hindsight to Anchor Past Knowledge in Continual Learning, 2019, AAAI.
[2] Elad Hoffer, et al. Task Agnostic Continual Learning Using Online Variational Bayes, 2018, ArXiv (1803.10123).
[3] Seyed Iman Mirzadeh, et al. Understanding the Role of Training Regimes in Continual Learning, 2020, NeurIPS.
[4] Hassan Ghasemzadeh, et al. Dropout as an Implicit Gating Mechanism For Continual Learning, 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[5] Min Lin, et al. Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning, 2020, ArXiv.
[6] Joel Lehman, et al. Learning to Continually Learn, 2020, ECAI.
[7] S. Levine, et al. Gradient Surgery for Multi-Task Learning, 2020, NeurIPS.
[8] Richard E. Turner, et al. Continual Learning with Adaptive Weights (CLAW), 2019, ICLR.
[9] Andrei A. Rusu, et al. Meta-Learning with Warped Gradient Descent, 2019, ICLR.
[10] Hugo Larochelle, et al. Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples, 2019, ICLR.
[11] Yee Whye Teh, et al. Functional Regularisation for Continual Learning using Gaussian Processes, 2019, ICLR.
[12] Fahad Shahbaz Khan, et al. Random Path Selection for Continual Learning, 2019, NeurIPS.
[13] Patrick H. Chen, et al. Overcoming Catastrophic Forgetting by Generative Regularization, 2019, ArXiv.
[14] S. Levine, et al. Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning, 2019, CoRL.
[15] Yee Whye Teh, et al. Continual Unsupervised Representation Learning, 2019, NeurIPS.
[16] Tinne Tuytelaars, et al. Online Continual Learning with Maximally Interfered Retrieval, 2019, ArXiv.
[17] Timothy E. J. Behrens, et al. Human Replay Spontaneously Reorganizes Experience, 2019, Cell.
[18] Yee Whye Teh, et al. Task Agnostic Continual Learning via Meta Learning, 2019, ArXiv.
[19] Sebastian Ruder, et al. Episodic Memory in Lifelong Language Learning, 2019, NeurIPS.
[20] Martha White, et al. Meta-Learning Representations for Continual Learning, 2019, NeurIPS.
[21] Ying Wei, et al. Hierarchically Structured Meta-learning, 2019, ICML.
[22] Subhransu Maji, et al. Meta-Learning With Differentiable Convex Optimization, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[23] Yoshua Bengio, et al. Gradient based sample selection for online continual learning, 2019, NeurIPS.
[24] Sergey Levine, et al. Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables, 2019, ICML.
[25] Yee Whye Teh, et al. Exploiting Hierarchy for Learning and Transfer in KL-regularized RL, 2019, ArXiv.
[26] Kyunghyun Cho, et al. Continual Learning via Neural Pruning, 2019, ArXiv.
[27] Marc'Aurelio Ranzato, et al. Continual Learning with Tiny Episodic Memories, 2019, ArXiv.
[28] Sergey Levine, et al. Online Meta-Learning, 2019, ICML.
[29] Xin Wang, et al. Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization, 2019, ICML.
[30] Sergey Levine, et al. Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL, 2018, ICLR.
[31] Tinne Tuytelaars, et al. Task-Free Continual Learning, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[32] David Rolnick, et al. Experience Replay for Continual Learning, 2018, NeurIPS.
[33] Martha White, et al. The Utility of Sparse Representations for Control in Reinforcement Learning, 2018, AAAI.
[34] Katja Hofmann, et al. Fast Context Adaptation via Meta-Learning, 2018, ICML.
[35] Marc'Aurelio Ranzato, et al. Efficient Lifelong Learning with A-GEM, 2018, ICLR.
[36] Razvan Pascanu, et al. Meta-Learning with Latent Embedding Optimization, 2018, ICLR.
[37] Marcus Rohrbach, et al. Selfless Sequential Learning, 2018, ICLR.
[38] Razvan Pascanu, et al. Adapting Auxiliary Losses Using Gradient Similarity, 2018, ArXiv.
[39] Tom Eccles, et al. Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies, 2018, NeurIPS.
[40] Yee Whye Teh, et al. Conditional Neural Processes, 2018, ICML.
[41] Sergey Levine, et al. Probabilistic Model-Agnostic Meta-Learning, 2018, NeurIPS.
[42] Yarin Gal, et al. Towards Robust Evaluations of Continual Learning, 2018, ArXiv.
[43] Yee Whye Teh, et al. Progress & Compress: A scalable framework for continual learning, 2018, ICML.
[44] Zhanxing Zhu, et al. Reinforced Continual Learning, 2018, NeurIPS.
[45] Sergey Levine, et al. Latent Space Policies for Hierarchical Reinforcement Learning, 2018, ICML.
[46] Erich Elsen, et al. Efficient Neural Audio Synthesis, 2018, ICML.
[47] Murray Shanahan, et al. Continual Reinforcement Learning with Complex Synapses, 2018, ICML.
[48] Razvan Pascanu, et al. Memory-based Parameter Adaptation, 2018, ICLR.
[49] Alexandros Karatzoglou, et al. Overcoming Catastrophic Forgetting with Hard Attention to the Task, 2018, ICML.
[50] Max Welling, et al. Learning Sparse Neural Networks through L0 Regularization, 2017, ICLR.
[51] Marcus Rohrbach, et al. Memory Aware Synapses: Learning what (not) to forget, 2017, ECCV.
[52] David Kappel, et al. Deep Rewiring: Training very sparse deep networks, 2017, ICLR.
[53] Richard E. Turner, et al. Variational Continual Learning, 2017, ICLR.
[54] Sung Ju Hwang, et al. Lifelong Learning with Dynamically Expandable Networks, 2017, ICLR.
[55] Peter Stone, et al. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science, 2017, Nature Communications.
[56] Aleksander Madry, et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 2017, ICLR.
[57] Derek Hoiem, et al. Learning without Forgetting, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[58] D. Hassabis, et al. Neuroscience-Inspired Artificial Intelligence, 2017, Neuron.
[59] Xu He, et al. Overcoming Catastrophic Interference by Conceptors, 2017, ArXiv.
[60] Yoshua Bengio, et al. A Closer Look at Memorization in Deep Networks, 2017, ICML.
[61] Marc'Aurelio Ranzato, et al. Gradient Episodic Memory for Continual Learning, 2017, NIPS.
[62] Jiwon Kim, et al. Continual Learning with Deep Generative Replay, 2017, NIPS.
[63] Alex Graves, et al. Automated Curriculum Learning for Neural Networks, 2017, ICML.
[64] Matthew B. Blaschko, et al. Encoder Based Lifelong Learning, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[65] Andrew McCallum, et al. Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples, 2017, NIPS.
[66] Richard S. Zemel, et al. Prototypical Networks for Few-shot Learning, 2017, NIPS.
[67] Surya Ganguli, et al. Continual Learning Through Synaptic Intelligence, 2017, ICML.
[68] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[69] Razvan Pascanu, et al. Discovering objects and their relations from entangled scene representations, 2017, ICLR.
[70] Chrisantha Fernando, et al. PathNet: Evolution Channels Gradient Descent in Super Neural Networks, 2017, ArXiv.
[71] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[72] Conrad D. James, et al. Neurogenesis deep learning: Extending deep networks to accommodate new classes, 2016, 2017 International Joint Conference on Neural Networks (IJCNN).
[73] Razvan Pascanu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[74] Christoph H. Lampert, et al. iCaRL: Incremental Classifier and Representation Learning, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[75] Zeb Kurth-Nelson, et al. Learning to reinforcement learn, 2016, CogSci.
[76] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[77] C. A. Nelson, et al. Learning to Learn, 2017, Encyclopedia of Machine Learning and Data Mining.
[78] Sergio Gomez Colmenarejo, et al. Hybrid computing using a neural network with dynamic external memory, 2016, Nature.
[79] Razvan Pascanu, et al. Progressive Neural Networks, 2016, ArXiv.
[80] Marcin Andrychowicz, et al. Learning to learn by gradient descent by gradient descent, 2016, NIPS.
[81] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[82] Alexander V. Terekhov, et al. Knowledge Transfer in Deep Block-Modular Neural Networks, 2015, Living Machines.
[83] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[84] Shai Shalev-Shwartz, et al. SelfieBoost: A Boosting Algorithm for Deep Learning, 2014, ArXiv.
[85] Yoshua Bengio, et al. An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks, 2013, ICLR.
[86] Surya Ganguli, et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks, 2013, ICLR.
[87] Bogdan Gabrys, et al. Metalearning: a survey of trends and technologies, 2013, Artificial Intelligence Review.
[88] Edward T. Bullmore, et al. Modular and Hierarchically Modular Organization of Brain Networks, 2010, Frontiers in Neuroscience.
[89] D. Sherry, et al. Seasonal hippocampal plasticity in food-storing birds, 2010, Philosophical Transactions of the Royal Society B: Biological Sciences.
[90] Jason Weston, et al. Curriculum learning, 2009, ICML '09.
[91] Wulfram Gerstner, et al. Tag-Trigger-Consolidation: A Model of Early and Late Long-Term-Potentiation and Depression, 2008, PLoS Computational Biology.
[92] J. Wixted. The psychology and neuroscience of forgetting, 2004, Annual Review of Psychology.
[93] Mark B. Ring. CHILD: A First Step Towards Continual Learning, 1997, Machine Learning.
[94] Sepp Hochreiter, et al. Learning to Learn Using Gradient Descent, 2001, ICANN.
[95] Anthony V. Robins, et al. Catastrophic Forgetting, Rehearsal and Pseudorehearsal, 1995, Connection Science.
[96] E. Tulving, et al. Episodic and semantic memory, 1972, Organization of Memory.