[1] Oriol Vinyals,et al. Qualitatively characterizing neural network optimization problems , 2014, ICLR.
[2] S. Risi,et al. Continual Learning through Evolvable Neural Turing Machines , 2016 .
[3] Nikos Komodakis,et al. Dynamic Few-Shot Visual Learning Without Forgetting , 2018, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[4] Gerald Tesauro,et al. Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference , 2018, ICLR.
[5] Marc'Aurelio Ranzato,et al. Gradient Episodic Memory for Continual Learning , 2017, NIPS.
[6] Marc'Aurelio Ranzato,et al. Efficient Lifelong Learning with A-GEM , 2018, ICLR.
[7] Shankar Krishnan,et al. An Investigation into Neural Net Optimization via Hessian Eigenvalue Density , 2019, ICML.
[8] Hassan Ghasemzadeh,et al. Dropout as an Implicit Gating Mechanism For Continual Learning , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[9] Yoshua Bengio,et al. An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks , 2013, ICLR.
[10] Yoshua Bengio,et al. Gradient based sample selection for online continual learning , 2019, NeurIPS.
[11] Thomas L. Griffiths,et al. Reconciling meta-learning and continual learning with online mixtures of tasks , 2018, NeurIPS.
[12] Jiashi Feng,et al. Variational Prototype Replays for Continual Learning , 2019 .
[13] Yoram Singer,et al. Shampoo: Preconditioned Stochastic Tensor Optimization , 2018, ICML.
[14] Yurii Nesterov,et al. Cubic regularization of Newton method and its global performance , 2006, Math. Program..
[15] Tinne Tuytelaars,et al. A Continual Learning Survey: Defying Forgetting in Classification Tasks , 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[16] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[17] Richard E. Turner,et al. Variational Continual Learning , 2017, ICLR.
[18] Ronald Kemker,et al. Measuring Catastrophic Forgetting in Neural Networks , 2017, AAAI.
[19] Razvan Pascanu,et al. Overcoming catastrophic forgetting in neural networks , 2016, Proceedings of the National Academy of Sciences.
[20] Michael McCloskey,et al. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem , 1989 .
[21] Yann LeCun,et al. The Loss Surfaces of Multilayer Networks , 2014, AISTATS.
[22] Yoshua Bengio,et al. An Empirical Study of Example Forgetting during Deep Neural Network Learning , 2018, ICLR.
[23] Yuan Yu,et al. TensorFlow: A system for large-scale machine learning , 2016, OSDI.
[24] Yee Whye Teh,et al. A Fast Learning Algorithm for Deep Belief Nets , 2006, Neural Computation.
[25] Jiwon Kim,et al. Continual Learning with Deep Generative Replay , 2017, NIPS.
[26] David Barber,et al. Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting , 2018, NeurIPS.
[27] Stefan Wermter,et al. Continual Lifelong Learning with Neural Networks: A Review , 2019, Neural Networks.
[28] Laurent Itti,et al. Closed-Loop Memory GAN for Continual Learning , 2018, IJCAI.
[29] Yee Whye Teh,et al. Continual Unsupervised Representation Learning , 2019, NeurIPS.
[30] Joel Lehman,et al. Learning to Continually Learn , 2020, ECAI.
[31] Yuxin Peng,et al. Error-Driven Incremental Learning in Deep Convolutional Neural Network for Large-Scale Image Classification , 2014, ACM Multimedia.
[32] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[33] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.
[34] Yee Whye Teh,et al. Functional Regularisation for Continual Learning using Gaussian Processes , 2019, ICLR.
[35] Yarin Gal,et al. A Unifying Bayesian View of Continual Learning , 2019, ArXiv.
[36] Yan Liu,et al. Deep Generative Dual Memory Network for Continual Learning , 2017, ArXiv.
[37] Richard Socher,et al. Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting , 2019, ICML.
[38] Sébastien Bubeck,et al. Convex Optimization: Algorithms and Complexity , 2014, Found. Trends Mach. Learn..
[39] Ameet Talwalkar,et al. Foundations of Machine Learning , 2012, Adaptive computation and machine learning.
[40] Surya Ganguli,et al. Continual Learning Through Synaptic Intelligence , 2017, ICML.
[41] Sung Ju Hwang,et al. Lifelong Learning with Dynamically Expandable Networks , 2017, ICLR.
[42] Yanshuai Cao,et al. Few-Shot Self Reminder to Overcome Catastrophic Forgetting , 2018, ArXiv.
[43] Mehrdad Farajtabar,et al. Orthogonal Gradient Descent for Continual Learning , 2019, AISTATS.
[44] Laurent Itti,et al. Closed-Loop GAN for Continual Learning , 2018, IJCAI.
[45] Yoshua Bengio,et al. Online continual learning with no task boundaries , 2019, ArXiv.
[46] Stefano Soatto,et al. Toward Understanding Catastrophic Forgetting in Continual Learning , 2019, ArXiv.
[47] Michael I. Jordan,et al. How to Escape Saddle Points Efficiently , 2017, ICML.
[48] Yarin Gal,et al. Towards Robust Evaluations of Continual Learning , 2018, ArXiv.
[49] Bohyung Han,et al. Continual Learning by Asymmetric Loss Approximation With Single-Side Overestimation , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).