Metalearned Neural Memory
Tsendsuren Munkhdalai | Adam Trischler | Alessandro Sordoni | Tong Wang
[1] Pentti Kanerva, et al. Sparse Distributed Memory, 1988.
[2] Hong Yu, et al. Meta Networks, 2017, ICML.
[3] Wen Sun, et al. Contextual Memory Trees, 2018.
[4] Jason Weston, et al. End-To-End Memory Networks, 2015, NIPS.
[5] Alex Graves, et al. Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes, 2016, NIPS.
[6] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[7] Zeb Kurth-Nelson, et al. Learning to reinforcement learn, 2016, CogSci.
[8] Jason Weston, et al. Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks, 2015, ICLR.
[9] Alex Graves, et al. Decoupled Neural Interfaces using Synthetic Gradients, 2016, ICML.
[10] Richard Socher, et al. Ask Me Anything: Dynamic Memory Networks for Natural Language Processing, 2015, ICML.
[11] Adam Trischler, et al. A Computational Model for Episodic Memory Inspired by the Brain, 2016.
[12] Jonathon Shlens, et al. A Learned Representation For Artistic Style, 2016, ICLR.
[13] F. Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain, 1958, Psychological Review.
[14] Arild Nøkland, et al. Direct Feedback Alignment Provides Learning in Deep Neural Networks, 2016, NIPS.
[15] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[16] Tsendsuren Munkhdalai, et al. Rapid Adaptation with Conditionally Shifted Neurons, 2017, ICML.
[17] Jürgen Schmidhuber. Learning to Control Fast-Weight Memories: An Alternative to Dynamic Recurrent Networks, 1991.
[18] Colin J. Akerman, et al. Random synaptic feedback weights support error backpropagation for deep learning, 2016, Nature Communications.
[19] Babak Hassibi, et al. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon, 1992, NIPS.
[20] Jason Weston, et al. Memory Networks, 2014, ICLR.
[21] Alex Graves, et al. Asynchronous Methods for Deep Reinforcement Learning, 2016, ICML.
[22] Geoffrey E. Hinton, et al. Using Fast Weights to Attend to the Recent Past, 2016, NIPS.
[23] Peter L. Bartlett, et al. RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning, 2016, ArXiv.
[24] J. Knott. The organization of behavior: A neuropsychological theory, 1951.
[25] Tsendsuren Munkhdalai, et al. Metalearning with Hebbian Fast Weights, 2018, ArXiv.
[26] Alex Graves, et al. Neural Turing Machines, 2014, ArXiv.
[27] Sergey Levine, et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, 2017, ICML.
[28] Pieter Abbeel, et al. A Simple Neural Attentive Meta-Learner, 2017, ICLR.
[29] Sepp Hochreiter, et al. Learning to Learn Using Gradient Descent, 2001, ICANN.
[30] Yan Wu, et al. Learning Attractor Dynamics for Generative Memory, 2018, NeurIPS.
[31] Jürgen Schmidhuber. Reducing the Ratio Between Learning Complexity and Number of Time Varying Variables in Fully Recurrent Nets, 1993.
[32] Hugo Larochelle, et al. Optimization as a Model for Few-Shot Learning, 2016, ICLR.
[33] Sanja Fidler, et al. Predicting Deep Zero-Shot Convolutional Neural Networks Using Textual Descriptions, 2015, IEEE International Conference on Computer Vision (ICCV).
[34] Kenneth O. Stanley, et al. Differentiable plasticity: training plastic neural networks with backpropagation, 2018, ICML.
[35] Geoffrey E. Hinton. Using fast weights to deblur old memories, 1987.
[36] Richard S. Zemel, et al. A Generative Model for Attractor Dynamics, 1999, NIPS.
[37] Sergey Levine, et al. One-Shot Visual Imitation Learning via Meta-Learning, 2017, CoRL.
[38] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[39] Vijay Kumar, et al. Memory Augmented Control Networks, 2017, ICLR.
[40] Aaron C. Courville, et al. FiLM: Visual Reasoning with a General Conditioning Layer, 2017, AAAI.
[41] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[42] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[43] Jürgen Schmidhuber, et al. Learning to Reason with Third-Order Tensor Products, 2018, NeurIPS.
[44] Sergey Bartunov, et al. Meta-Learning with Memory-Augmented Neural Networks, 2016.
[45] Marcin Andrychowicz, et al. Learning to learn by gradient descent by gradient descent, 2016, NIPS.
[46] Hong Yu, et al. Neural Semantic Encoders, 2016, EACL.
[47] Sergio Gomez Colmenarejo, et al. Hybrid computing using a neural network with dynamic external memory, 2016, Nature.
[48] Geoffrey E. Hinton. Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems, 1991.
[49] Alex Graves, et al. The Kanerva Machine: A Generative Distributed Memory, 2018, ICLR.