Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
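The titular algorithm learns an initialization from which one or a few gradient steps adapt a model to a new task. Below is a minimal, hypothetical sketch of that training loop using the first-order approximation (cf. [46]) on a toy task family of 1-D linear regressions; the model, step sizes, and batch sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy first-order MAML-style sketch (cf. [46]): each task is a 1-D linear
# regression with a random slope; we meta-learn the initialization w.
# All hyperparameters here are illustrative choices only.
rng = np.random.default_rng(0)
alpha, beta = 0.05, 0.01  # inner (adaptation) and outer (meta) step sizes

def loss_grad(w, X, y):
    """Gradient of mean squared error for the linear model X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def sample_task():
    """A task is a random slope a; returns a sampler for (X, y) from it."""
    a = rng.uniform(-2.0, 2.0)
    def data(n=10):
        X = rng.uniform(-1.0, 1.0, size=(n, 1))
        return X, a * X[:, 0]
    return data

w = np.zeros(1)  # meta-parameters: the initialization being learned
for step in range(500):
    meta_grad = np.zeros_like(w)
    for _ in range(4):  # meta-batch of tasks
        data = sample_task()
        Xs, ys = data()                            # support set
        Xq, yq = data()                            # query set, same task
        w_task = w - alpha * loss_grad(w, Xs, ys)  # inner adaptation step
        meta_grad += loss_grad(w_task, Xq, yq)     # first-order outer gradient
    w -= beta * meta_grad / 4                      # meta-update of the init
```

Full MAML differentiates through the inner update (a second-order term); the first-order variant above drops that term, which [46] studies in detail.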
[1] Geoffrey E. Hinton. Using fast weights to deblur old memories, 1987.
[2] Yoshua Bengio, et al. Learning a synaptic learning rule, 1991, IJCNN-91-Seattle International Joint Conference on Neural Networks.
[3] Richard J. Mammone, et al. Meta-neural networks that learn by learning, 1992, IJCNN International Joint Conference on Neural Networks.
[4] Jürgen Schmidhuber, et al. Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks, 1992, Neural Computation.
[5] Christian Goerick, et al. Fast learning for problem classes using knowledge based network initialization, 2000, IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000).
[6] Sepp Hochreiter, et al. Learning to Learn Using Gradient Descent, 2001, ICANN.
[7] Ronald J. Williams, et al. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[8] Yoshua Bengio, et al. On the Optimization of a Synaptic Learning Rule, 2007.
[9] G. Evans, et al. Learning to Optimize, 2008.
[10] Joshua B. Tenenbaum, et al. One shot learning of simple visual concepts, 2011, CogSci.
[11] Yuval Tassa, et al. MuJoCo: A physics engine for model-based control, 2012, IEEE/RSJ International Conference on Intelligent Robots and Systems.
[12] Surya Ganguli, et al. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks, 2013, ICLR.
[13] Trevor Darrell, et al. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition, 2013, ICML.
[14] Marek Rei, et al. Online Representation Learning in Recurrent Neural Language Models, 2015, EMNLP.
[15] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[16] Sergey Levine, et al. Trust Region Policy Optimization, 2015, ICML.
[17] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[18] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[19] Ryan P. Adams, et al. Gradient-based Hyperparameter Optimization through Reversible Learning, 2015, ICML.
[20] Gregory R. Koch, et al. Siamese Neural Networks for One-Shot Image Recognition, 2015.
[21] Daan Wierstra, et al. One-Shot Generalization in Deep Generative Models, 2016, ICML.
[22] Pieter Abbeel, et al. Benchmarking Deep Reinforcement Learning for Continuous Control, 2016, ICML.
[23] Ruslan Salakhutdinov, et al. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning, 2015, ICLR.
[24] Jeff Clune, et al. Evolvability Search: Directly Selecting for Evolvability in order to Study and Produce It, 2016, GECCO.
[25] Daan Wierstra, et al. Meta-Learning with Memory-Augmented Neural Networks, 2016, ICML.
[26] Marcin Andrychowicz, et al. Learning to learn by gradient descent by gradient descent, 2016, NIPS.
[27] Tim Salimans, et al. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks, 2016, NIPS.
[28] Peter L. Bartlett, et al. RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning, 2016, arXiv.
[29] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, arXiv.
[30] Daan Wierstra, et al. One-shot Learning with Memory-Augmented Neural Networks, 2016, arXiv.
[31] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[32] Trevor Darrell, et al. Data-dependent Initializations of Convolutional Neural Networks, 2015, ICLR.
[33] Geoffrey E. Hinton, et al. Using Fast Weights to Attend to the Recent Past, 2016, NIPS.
[34] Tapani Raiko, et al. International Conference on Learning Representations (ICLR), 2016.
[35] Zeb Kurth-Nelson, et al. Learning to reinforcement learn, 2016, CogSci.
[36] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[37] Amos J. Storkey, et al. Towards a Neural Statistician, 2016, ICLR.
[38] Hong Yu, et al. Meta Networks, 2017, ICML.
[39] Razvan Pascanu, et al. Overcoming catastrophic forgetting in neural networks, 2016, Proceedings of the National Academy of Sciences.
[40] C. A. Nelson, et al. Learning to Learn, 2017, Encyclopedia of Machine Learning and Data Mining.
[41] Aurko Roy, et al. Learning to Remember Rare Events, 2017, ICLR.
[42] Ambedkar Dukkipati, et al. Attentive Recurrent Comparators, 2017, ICML.
[43] Richard S. Zemel, et al. Prototypical Networks for Few-shot Learning, 2017, NIPS.
[44] Luc Van Gool, et al. One-Shot Video Object Segmentation, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[45] Hang Li, et al. Meta-SGD: Learning to Learn Quickly for Few Shot Learning, 2017, arXiv.
[46] Joshua Achiam, et al. On First-Order Meta-Learning Algorithms, 2018, arXiv.
[47] Joseph J. Lim, et al. Model-Agnostic Meta-Learning for Multimodal Task Distributions, 2018.
[48] Amos J. Storkey, et al. How to train your MAML, 2018, ICLR.