Credit assignment in traditional recurrent neural networks usually involves back-propagating through a long chain of tied weight matrices. The length of this chain scales linearly with the number of time-steps, since the same network is run at each time-step, and this gives rise to well-studied problems such as vanishing gradients. In contrast, the recurrent activity of an NNEM architecture does not involve a long chain of computation (though some architectures, such as the NTM, do use a traditional recurrent network as a controller). Rather, externally stored embedding vectors are used at each time-step, and no messages are passed forward from previous time-steps. Vanishing gradients are therefore not a problem, because all of the necessary gradient paths are short. However, these paths are extremely numerous (one per embedding vector in memory) and are reused for a very long time (until the vector leaves the memory). The forward-pass information for each memory must therefore be stored for the entire lifetime of that memory. This is problematic because the additional storage far surpasses that of the memories themselves, to the extent that large memories become infeasible to back-propagate through in high-dimensional settings. One way to avoid holding onto forward-pass information is to recalculate the forward pass whenever gradient information becomes available. However, if the observations in the domain of interest are too large to store, the forward pass cannot be reinstated directly. Instead, we rely on a learned autoencoder to reinstate the observation and then use the embedding network to recalculate the forward pass. Since the recalculated embedding vector is unlikely to match the one stored in memory exactly, we try out two approximations for utilizing the error gradient with respect to the vector in memory.
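To make the recomputation idea concrete, the following is a minimal sketch, assuming a PyTorch setup: an observation is embedded and written to memory with only a compact latent code retained; much later, when a gradient with respect to the stored vector arrives, the autoencoder reinstates an approximate observation, the embedding network recomputes the forward pass, and the memory gradient is injected into that recomputed embedding. All module names, shapes, and the particular approximation shown (treating the recomputed embedding as a stand-in for the stored one) are illustrative assumptions, not the exact architecture or either of the two approximations described above.

```python
import torch
import torch.nn as nn

obs_dim, latent_dim, embed_dim = 64, 16, 32

# Learned autoencoder used to reinstate observations from a compact code.
encoder = nn.Linear(obs_dim, latent_dim)
decoder = nn.Linear(latent_dim, obs_dim)

# Embedding network whose output is written to the external memory.
embed_net = nn.Sequential(nn.Linear(obs_dim, embed_dim), nn.Tanh())

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()) + list(embed_net.parameters()),
    lr=1e-3,
)

# --- Write phase: embed the observation, keep only small tensors. -----------
obs = torch.randn(1, obs_dim)               # assumed too large to store long-term
latent = encoder(obs).detach()              # compact code kept alongside the memory
stored_embedding = embed_net(obs).detach()  # vector actually read at later time-steps

# --- Read phase (much later): a gradient w.r.t. the stored vector arrives. --
# In practice this would come from the downstream loss through the memory read;
# a random tensor stands in for it here.
grad_wrt_memory = torch.randn_like(stored_embedding)

# Reinstate an approximate observation and recompute the forward pass.
reconstructed_obs = decoder(latent)
recomputed_embedding = embed_net(reconstructed_obs)

# Approximation (assumed): push the memory gradient through the recomputed
# embedding into the embedding network and, via the decoder, the autoencoder,
# even though the recomputed vector will not exactly match the stored one.
optimizer.zero_grad()
recomputed_embedding.backward(grad_wrt_memory)
optimizer.step()
```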
[1] James L. McClelland et al. "Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory." Psychological Review, 1995.
[2] Jason Weston et al. "Memory Networks." ICLR, 2014.
[3] Alex Graves et al. "Neural Turing Machines." arXiv, 2014.
[4] Joel Z. Leibo et al. "Model-Free Episodic Control." arXiv, 2016.
[5] Max Welling et al. "Auto-Encoding Variational Bayes." ICLR, 2013.
[6] Alex Graves et al. "Decoupled Neural Interfaces using Synthetic Gradients." ICML, 2016.
[7] Sepp Hochreiter et al. "The Vanishing Gradient Problem During Learning Recurrent Neural Nets and Problem Solutions." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 1998.
[8] Sergio Gomez Colmenarejo et al. "Hybrid computing using a neural network with dynamic external memory." Nature, 2016.