Alex Nichol | Joshua Achiam | John Schulman
[1] Geoffrey E. Hinton. Using fast weights to deblur old memories, 1987.
[2] Sepp Hochreiter, et al. Learning to Learn Using Gradient Descent, 2001, ICANN.
[3] Nikolaus Hansen, et al. The CMA Evolution Strategy: A Comparing Review, 2006, Towards a New Evolutionary Computation.
[4] Li Fei-Fei, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[5] Lauren A. Schmidt. Meaning and compositionality as statistical induction of categories and constraints, 2009.
[6] Alexander J. Smola, et al. Parallelized Stochastic Gradient Descent, 2010, NIPS.
[7] Joshua B. Tenenbaum, et al. One shot learning of simple visual concepts, 2011, CogSci.
[8] Joshua B. Tenenbaum, et al. One-Shot Learning with a Hierarchical Nonparametric Bayesian Model, 2011, ICML Unsupervised and Transfer Learning.
[9] Trevor Darrell, et al. Part-Based R-CNNs for Fine-Grained Category Detection, 2014, ECCV.
[10] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[11] Joshua B. Tenenbaum, et al. Human-level concept learning through probabilistic program induction, 2015, Science.
[12] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[13] Daan Wierstra, et al. Meta-Learning with Memory-Augmented Neural Networks, 2016, ICML.
[14] Marcin Andrychowicz, et al. Learning to learn by gradient descent by gradient descent, 2016, NIPS.
[15] Tom Schaul, et al. Dueling Network Architectures for Deep Reinforcement Learning, 2015, ICML.
[16] Peter L. Bartlett, et al. RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning, 2016, ArXiv.
[17] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[18] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[19] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[20] Sergey Levine, et al. Meta-Learning and Universality: Deep Representations and Gradient Descent can Approximate any Learning Algorithm, 2017, ICLR.
[21] Thomas L. Griffiths, et al. Recasting Gradient-Based Meta-Learning as Hierarchical Bayes, 2018, ICLR.
[22] Thomas Paine, et al. Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions, 2017, ICLR.