Long Timescale Credit Assignment in Neural Networks with External Memory
Format: | Article |
---|---|
Language: | eng |
Abstract: | Credit assignment in traditional recurrent neural networks usually involves
back-propagating through a long chain of tied weight matrices. The length of
this chain scales linearly with the number of time-steps as the same network is
run at each time-step. This creates many problems, such as vanishing gradients,
that have been well studied. In contrast, the architecture of a neural network
with external memory (NNEM) doesn't involve a long chain of recurrent activity
(though some architectures, such as the NTM, do utilize a traditional recurrent
architecture as a controller). Rather, the externally stored embedding vectors
are used at each
time-step, but no messages are passed from previous time-steps. This means that
vanishing gradients aren't a problem, as all of the necessary gradient paths
are short. However, these paths are extremely numerous (one per embedding
vector in memory) and reused for a very long time (until the vector leaves the memory).
Thus, the forward-pass information of each memory must be stored for the entire
duration of the memory. This is problematic as this additional storage far
surpasses that of the actual memories, to the extent that large memories are
infeasible to back-propagate through in high-dimensional settings. One way to
get around the need to hold onto forward-pass information is to recalculate the
forward-pass whenever gradient information is available. However, if the
observations are too large to store in the domain of interest, direct
reinstatement of a forward pass cannot occur. Instead, we rely on a learned
autoencoder to reinstate the observation, and then use the embedding network to
recalculate the forward-pass. Since the recalculated embedding vector is
unlikely to perfectly match the one stored in memory, we evaluate two
approximations for utilizing the error gradient w.r.t. the vector in memory. |
---|---|
DOI: | 10.48550/arxiv.1701.03866 |
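
The abstract's final step, reinstating the forward pass from a reconstruction rather than from stored activations, can be illustrated with a small sketch. The Python code below is a minimal, self-contained approximation of that idea, assuming a toy linear embedding network and a linear autoencoder decoder; the names, shapes, and the particular gradient shortcut (treating the gradient w.r.t. the stored vector as if it were the gradient w.r.t. the recomputed vector) are illustrative assumptions, not the paper's exact method or necessarily either of its two approximations.

```python
# Minimal sketch (not the paper's implementation) of gradient reinstatement
# through a learned autoencoder. All shapes, names, and the gradient
# approximation used here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

obs_dim, emb_dim = 64, 8                        # observations too large to keep around
W_f = rng.normal(0, 0.1, (emb_dim, obs_dim))    # toy embedding network f
W_g = rng.normal(0, 0.1, (obs_dim, emb_dim))    # autoencoder decoder g


def embed(o):
    """Forward pass of the embedding network: m = tanh(W_f o)."""
    return np.tanh(W_f @ o)


def decode(m):
    """Decoder of the learned autoencoder: reconstructs the observation."""
    return W_g @ m


# --- write phase: store only the embedding vector, discard the observation ---
o_t = rng.normal(size=obs_dim)
memory = [embed(o_t)]                  # external memory of embedding vectors

# --- much later: a loss gradient w.r.t. the stored vector arrives ------------
m_stored = memory[0]
dL_dm = rng.normal(size=emb_dim)       # stand-in for the gradient from the read/loss path

# Reinstate the forward pass: reconstruct the observation, then re-embed it.
o_hat = decode(m_stored)
pre = W_f @ o_hat                      # pre-activation of the recomputed pass
m_hat = np.tanh(pre)

# Approximation (an assumption, not necessarily one of the paper's two):
# treat dL/dm_stored as dL/dm_hat and back-propagate through the recomputed
# forward pass into the embedding network's weights.
dL_dpre = dL_dm * (1.0 - m_hat ** 2)   # derivative of tanh is 1 - tanh^2
dL_dWf = np.outer(dL_dpre, o_hat)      # gradient for the embedding weights

W_f -= 0.01 * dL_dWf                   # one SGD step on the embedding network
```

The point of the sketch is that only the embedding vector needs to live in memory: the observation is recovered on demand by the decoder whenever a gradient arrives, at the cost of a mismatch between the recomputed and stored vectors, which is what the paper's approximations are meant to handle.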