How do Quadratic Regularizers Prevent Catastrophic Forgetting: The Role of Interpolation