Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning
Saved in:
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Recurrent neural networks have a strong inductive bias towards learning temporally compressed representations, as the entire history of a sequence is represented by a single vector. By contrast, Transformers have little inductive bias towards learning temporally compressed representations, as they allow for attention over all previously computed elements in a sequence. Having a more compressed representation of a sequence may be beneficial for generalization, as a high-level representation may be more easily re-used and re-purposed and will contain fewer irrelevant details. At the same time, excessive compression of representations comes at the cost of expressiveness. We propose a solution that divides computation into two streams. A slow stream that is recurrent in nature aims to learn a specialized and compressed representation by forcing chunks of $K$ time steps into a single representation that is divided into multiple vectors. At the same time, a fast stream, parameterized as a Transformer, processes chunks of $K$ time steps conditioned on the information in the slow stream. With the proposed approach we hope to gain the expressiveness of the Transformer while encouraging better compression and structuring of representations in the slow stream. We show the benefits of the proposed method in terms of improved sample efficiency and generalization performance compared to various competitive baselines on visual perception and sequential decision-making tasks.
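The abstract describes the fast/slow split only at a high level. The following PyTorch sketch is one possible reading of it, assuming cross-attention is used both to condition the fast Transformer stream on the slow state and to summarize each chunk before a GRU-style recurrent update of the slow state; the class and parameter names (TwoStreamChunkModel, chunk_size, num_slow_vectors) are illustrative and not the authors' implementation.

```python
import torch
import torch.nn as nn

class TwoStreamChunkModel(nn.Module):
    """Illustrative sketch of the fast/slow two-stream idea from the abstract.
    The cross-attention conditioning and GRU-style slow update are assumptions
    made for this sketch, not the paper's exact architecture."""

    def __init__(self, dim=128, chunk_size=8, num_slow_vectors=4, num_heads=4):
        super().__init__()
        self.chunk_size = chunk_size
        # Slow-stream state: a small set of vectors summarizing the history.
        self.init_slow = nn.Parameter(torch.randn(num_slow_vectors, dim))
        # Fast stream: a Transformer layer applied within each chunk.
        self.fast_self_attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        # Conditioning of the fast stream on the slow stream (assumed: cross-attention).
        self.fast_cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Recurrent update of the slow stream from the chunk's fast outputs
        # (assumed: cross-attention read followed by a GRU cell).
        self.slow_cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.slow_update = nn.GRUCell(dim, dim)

    def forward(self, x):
        # x: (batch, seq_len, dim); seq_len assumed divisible by chunk_size.
        batch, seq_len, dim = x.shape
        slow = self.init_slow.unsqueeze(0).expand(batch, -1, -1).contiguous()
        outputs = []
        for start in range(0, seq_len, self.chunk_size):
            chunk = x[:, start:start + self.chunk_size]        # fast-stream inputs
            fast = self.fast_self_attn(chunk)                  # attention within the chunk
            cond, _ = self.fast_cross_attn(fast, slow, slow)   # read from the slow stream
            fast = fast + cond
            outputs.append(fast)
            # Compress the chunk into the slow state (recurrent, once per chunk).
            summary, _ = self.slow_cross_attn(slow, fast, fast)
            slow = self.slow_update(
                summary.reshape(-1, dim), slow.reshape(-1, dim)
            ).view(batch, -1, dim)
        return torch.cat(outputs, dim=1), slow
```

In this sketch the slow state is updated only once per chunk of $K$ time steps, which is what enforces the temporal compression the abstract describes, while the fast stream still attends freely within each chunk.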
DOI: 10.48550/arxiv.2205.14794