Unify Continual Learning Research
Pau Rodríguez López | M. Riemer | Laurent Charlin | Khimya Khetarpal | I. Rish | Florian Golemo | Timothée Lesort | O. Ostapenko | Ryan Lindeborg | Lucas Cecchi | David Vázquez | Massimo Caccia
[1] Razvan Pascanu, et al. Continual World: A Robotic Benchmark For Continual Reinforcement Learning, 2021, NeurIPS.
[2] Simone Calderara, et al. Avalanche: an End-to-End Library for Continual Learning, 2021, IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[3] Arthur Douillard, et al. Continuum: Simple Management of Complex Continual Learning Scenarios, 2021, arXiv.
[4] Shirin Enshaeifar, et al. Continual Learning Using Bayesian Neural Networks, 2019, IEEE Transactions on Neural Networks and Learning Systems.
[5] Tinne Tuytelaars, et al. A Continual Learning Survey: Defying Forgetting in Classification Tasks, 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[6] Doina Precup, et al. Towards Continual Reinforcement Learning: A Review and Perspectives, 2020, arXiv.
[7] Alexandre Drouin, et al. Synbols: Probing Learning Algorithms with Synthetic Datasets, 2020, NeurIPS.
[8] Philip H. S. Torr, et al. GDumb: A Simple Approach that Questions Our Progress in Continual Learning, 2020, ECCV.
[9] Eric Eaton, et al. Lifelong Policy Gradient Learning of Factored Policies for Faster Training Without Forgetting, 2020, NeurIPS.
[10] Chelsea Finn, et al. Deep Reinforcement Learning amidst Lifelong Non-Stationarity, 2020, arXiv.
[11] Sridhar Mahadevan, et al. Optimizing for the Future in Non-Stationary MDPs, 2020, ICML.
[12] Murray Shanahan, et al. Continual Reinforcement Learning with Multi-Timescale Replay, 2020, arXiv.
[13] David Vázquez, et al. Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning, 2020, NeurIPS.
[14] Junsoo Ha, et al. A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning, 2020, ICLR.
[15] Marco Pavone, et al. Continuous Meta-Learning without Tasks, 2019, NeurIPS.
[16] Natalia Díaz Rodríguez, et al. Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges, 2019, Information Fusion.
[17] Philip S. Thomas, et al. Lifelong Learning with a Changing Action Set, 2019, AAAI.
[18] S. Levine, et al. Meta-World: A Benchmark and Evaluation for Multi-Task and Meta Reinforcement Learning, 2019, CoRL.
[19] Tengyu Ma, et al. A Model-based Approach for Sample-efficient Multi-task Reinforcement Learning, 2019, arXiv.
[20] David Filliat, et al. DisCoRL: Continual Reinforcement Learning via Policy Distillation, 2019, arXiv.
[21] Yee Whye Teh, et al. Task Agnostic Continual Learning via Meta Learning, 2019, arXiv.
[22] Andreas S. Tolias, et al. Three scenarios for continual learning, 2019, arXiv.
[23] Yoshua Bengio, et al. Gradient based sample selection for online continual learning, 2019, NeurIPS.
[24] David Filliat, et al. Generative Models from the Perspective of Continual Learning, 2019, International Joint Conference on Neural Networks (IJCNN).
[25] Tinne Tuytelaars, et al. Task-Free Continual Learning, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[26] David Rolnick, et al. Experience Replay for Continual Learning, 2018, NeurIPS.
[27] David Filliat, et al. Marginal Replay vs Conditional Replay for Continual Learning, 2018, ICANN.
[28] Marc'Aurelio Ranzato, et al. Efficient Lifelong Learning with A-GEM, 2018, ICLR.
[29] Gerald Tesauro, et al. Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference, 2018, ICLR.
[30] Stefan Wermter, et al. Continual Lifelong Learning with Neural Networks: A Review, 2018, Neural Networks.
[31] Marlos C. Machado, et al. Generalization and Regularization in DQN, 2018, arXiv.
[32] Tom Schaul, et al. Transfer in Deep Reinforcement Learning Using Successor Features and Generalised Policy Improvement, 2018, ICML.
[33] Yarin Gal, et al. Towards Robust Evaluations of Continual Learning, 2018, arXiv.
[34] Herke van Hoof, et al. Addressing Function Approximation Error in Actor-Critic Methods, 2018, ICML.
[35] Sergey Levine, et al. Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor, 2018, ICML.
[36] Alexandros Karatzoglou, et al. Overcoming Catastrophic Forgetting with Hard Attention to the Task, 2018, ICML.
[37] Marcus Rohrbach, et al. Memory Aware Synapses: Learning what (not) to forget, 2017, ECCV.
[38] Svetlana Lazebnik, et al. PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Richard E. Turner, et al. Variational Continual Learning, 2017, ICLR.
[40] Philip Bachman, et al. Deep Reinforcement Learning that Matters, 2017, AAAI.
[41] Alec Radford, et al. Proximal Policy Optimization Algorithms, 2017, arXiv.
[42] Marc'Aurelio Ranzato, et al. Gradient Episodic Memory for Continual Learning, 2017, NIPS.
[43] Jiwon Kim, et al. Continual Learning with Deep Generative Replay, 2017, NIPS.
[44] Surya Ganguli, et al. Continual Learning Through Synaptic Intelligence, 2017, ICML.
[45] Chrisantha Fernando, et al. PathNet: Evolution Channels Gradient Descent in Super Neural Networks, 2017, arXiv.
[46] Razvan Pascanu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[47] Christoph H. Lampert, et al. iCaRL: Incremental Classifier and Representation Learning, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[48] Razvan Pascanu, et al. Progressive Neural Networks, 2016, arXiv.
[49] Alex Graves, et al. Asynchronous Methods for Deep Reinforcement Learning, 2016, ICML.
[50] Ruslan Salakhutdinov, et al. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning, 2015, ICLR.
[51] Yuval Tassa, et al. Continuous control with deep reinforcement learning, 2015, ICLR.
[52] Massimiliano Pontil, et al. The Benefit of Multitask Representation Learning, 2015, Journal of Machine Learning Research.
[53] Shane Legg, et al. Human-level control through deep reinforcement learning, 2015, Nature.
[54] Daniele Calandriello, et al. Sparse multi-task reinforcement learning, 2014, Intelligenza Artificiale.
[55] Eric Eaton, et al. Online Multi-Task Learning for Policy Gradient Methods, 2014, ICML.
[56] Peter Stone, et al. Transfer Learning for Reinforcement Learning Domains: A Survey, 2009, Journal of Machine Learning Research.
[57] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[58] Mark B. Ring. CHILD: A First Step Towards Continual Learning, 1997, Machine Learning.
[59] Dit-Yan Yeung, et al. Hidden-Mode Markov Decision Processes for Nonstationary Sequential Decision Making, 2001, Sequence Learning.
[60] R. French. Catastrophic forgetting in connectionist networks, 1999, Trends in Cognitive Sciences.
[61] Sebastian Thrun, et al. Lifelong robot learning, 1993, Robotics and Autonomous Systems.
[62] Sebastian Thrun, et al. Finding Structure in Reinforcement Learning, 1994, NIPS.