Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors
[1] Leonidas Guibas, et al. Side-Tuning: Network Adaptation via Additive Side Networks, 2019, ArXiv.
[2] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[3] Ludovic Denoyer, et al. Efficient Continual Learning with Modular Networks and Task-Driven Priors, 2020, ArXiv.
[4] Andreas S. Tolias, et al. Generative replay with feedback connections as a general strategy for continual learning, 2018, ArXiv.
[5] David Rolnick, et al. Experience Replay for Continual Learning, 2018, NeurIPS.
[6] Surya Ganguli, et al. Continual Learning Through Synaptic Intelligence, 2017, ICML.
[7] Derek Hoiem, et al. Learning without Forgetting, 2016, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[8] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[9] Junsoo Ha, et al. A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning, 2020, ICLR.
[10] Marcus Rohrbach, et al. Memory Aware Synapses: Learning what (not) to forget, 2017, ECCV.
[11] Yarin Gal, et al. Understanding Measures of Uncertainty for Adversarial Example Detection, 2018, UAI.
[12] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[13] Marc'Aurelio Ranzato, et al. Gradient Episodic Memory for Continual Learning, 2017, NIPS.
[14] Yee Whye Teh, et al. Do Deep Generative Models Know What They Don't Know?, 2018, ICLR.
[15] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[16] Yi-Ming Chan, et al. Compacting, Picking and Growing for Unforgetting Continual Learning, 2019, NeurIPS.
[17] Kerstin Vogler, et al. Table of Integrals, Series, and Products, 2016.
[18] Yarin Gal, et al. Towards Robust Evaluations of Continual Learning, 2018, ArXiv.
[19] István Csabai, et al. Detecting and classifying lesions in mammograms with Deep Learning, 2017, Scientific Reports.
[20] Yoshua Bengio, et al. An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks, 2013, ICLR.
[21] Yee Whye Teh, et al. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables, 2016, ICLR.
[22] Stefan Wermter, et al. Continual Lifelong Learning with Neural Networks: A Review, 2019, Neural Networks.
[23] Zhanxing Zhu, et al. Reinforced Continual Learning, 2018, NeurIPS.
[24] Yoram Singer, et al. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization, 2011, Journal of Machine Learning Research.
[25] Stefan Zohren, et al. Hierarchical Indian buffet neural networks for Bayesian continual learning, 2019, UAI.
[26] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[27] Yee Whye Teh, et al. Progress & Compress: A scalable framework for continual learning, 2018, ICML.
[28] Andreas S. Tolias, et al. Three scenarios for continual learning, 2019, ArXiv.
[29] Abhishek Kumar, et al. Nonparametric Bayesian Structure Adaptation for Continual Learning, 2019, ArXiv.
[30] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[31] Razvan Pascanu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[32] Yen-Cheng Liu, et al. Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines, 2018, ArXiv.
[33] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[34] Marco Cote. Stick-Breaking Variational Autoencoders, 2017.
[35] David Barber, et al. Online Structured Laplace Approximations for Overcoming Catastrophic Forgetting, 2018, NeurIPS.
[36] Lawrence Carin, et al. Automatic threat recognition of prohibited items at aviation checkpoint with x-ray imaging: a deep learning approach, 2018, Defense + Security.
[37] Michael McCloskey, et al. Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem, 1989.
[38] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[39] Guoyin Wang, et al. Generative Adversarial Network Training is a Continual Learning Problem, 2018, ArXiv.
[40] J. V. Michalowicz, et al. Handbook of Differential Entropy, 2013.
[41] Richard E. Turner, et al. Variational Continual Learning, 2017, ICLR.
[42] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proceedings of the IEEE.
[43] Tinne Tuytelaars, et al. Task-Free Continual Learning, 2019, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[44] Yee Whye Teh, et al. Stick-breaking Construction for the Indian Buffet Process, 2007, AISTATS.
[45] Yarin Gal, et al. Uncertainty in Deep Learning, 2016.
[46] Yarin Gal, et al. A Unifying Bayesian View of Continual Learning, 2019, ArXiv.
[47] Ronald J. Williams, et al. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 2004, Machine Learning.
[48] Jiwon Kim, et al. Continual Learning with Deep Generative Replay, 2017, NIPS.
[49] P. Kumaraswamy. A generalized probability density function for double-bounded random processes, 1980.
[50] R. Ratcliff, et al. Connectionist models of recognition memory: constraints imposed by learning and forgetting functions, 1990, Psychological Review.
[51] Kibok Lee, et al. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks, 2018, NeurIPS.
[52] Thomas L. Griffiths, et al. Infinite latent feature models and the Indian buffet process, 2005, NIPS.