Richard E. Turner | Siddharth Swaroop | Noel Loo
[1] James Martens, et al. New Insights and Perspectives on the Natural Gradient Method, 2014, J. Mach. Learn. Res.
[2] Sebastian Nowozin, et al. How Good is the Bayes Posterior in Deep Neural Networks Really?, 2020, ICML.
[3] Aaron C. Courville, et al. FiLM: Visual Reasoning with a General Conditioning Layer, 2017, AAAI.
[4] R. French. Catastrophic forgetting in connectionist networks, 1999, Trends in Cognitive Sciences.
[5] Richard E. Turner, et al. Overpruning in Variational Bayesian Neural Networks, 2018, arXiv:1801.06230.
[6] Manfred Opper, et al. The Variational Gaussian Approximation Revisited, 2009, Neural Computation.
[7] Taesup Moon, et al. Uncertainty-based Continual Learning with Adaptive Regularization, 2019, NeurIPS.
[8] David Barber, et al. Online Structured Laplace Approximations For Overcoming Catastrophic Forgetting, 2018, NeurIPS.
[9] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[10] Alexander A. Alemi, et al. Fixing a Broken ELBO, 2017, ICML.
[11] Razvan Pascanu, et al. Progressive Neural Networks, 2016, arXiv.
[12] Razvan Pascanu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[13] Marc'Aurelio Ranzato, et al. Gradient Episodic Memory for Continual Learning, 2017, NIPS.
[14] Yee Whye Teh, et al. Progress & Compress: A scalable framework for continual learning, 2018, ICML.
[15] Richard E. Turner, et al. Two problems with variational expectation maximisation for time-series models, 2011.
[16] Christopher Burgess, et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, 2016, ICLR.
[17] Mohammad Emtiyaz Khan, et al. Continual Deep Learning by Functional Regularisation of Memorable Past, 2020, NeurIPS.
[18] Byoung-Tak Zhang, et al. Overcoming Catastrophic Forgetting by Incremental Moment Matching, 2017, NIPS.
[19] Guodong Zhang, et al. Noisy Natural Gradient as Variational Inference, 2017, ICML.
[20] Sebastian Nowozin, et al. Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes, 2019, NeurIPS.
[21] Mohammad Emtiyaz Khan, et al. Practical Deep Learning with Bayesian Principles, 2019, NeurIPS.
[22] Alexander J. Smola, et al. Laplace Propagation, 2003, NIPS.
[23] Chrisantha Fernando, et al. PathNet: Evolution Channels Gradient Descent in Super Neural Networks, 2017, arXiv.
[24] Mehrdad Farajtabar, et al. SOLA: Continual Learning with Second-Order Loss Approximation, 2020, arXiv.
[25] Richard E. Turner, et al. Improving and Understanding Variational Continual Learning, 2019, arXiv.
[26] Surya Ganguli, et al. Continual Learning Through Synaptic Intelligence, 2017, ICML.
[27] Richard E. Turner, et al. Continual Learning with Adaptive Weights (CLAW), 2020, ICLR.
[28] Richard E. Turner, et al. Variational Continual Learning, 2017, ICLR.
[29] Andrea Vedaldi, et al. Learning multiple visual domains with residual adapters, 2017, NIPS.
[30] Stefano Soatto, et al. Where is the Information in a Deep Neural Network?, 2019, arXiv.
[31] Alexandros Karatzoglou, et al. Overcoming Catastrophic Forgetting with Hard Attention to the Task, 2018, ICML.