Memory Transformer
Format: Article
Language: English
Online access: Order full text
Abstract: Transformer-based models have achieved state-of-the-art results in many natural language processing tasks. The self-attention architecture allows the Transformer to combine information from all elements of a sequence into context-aware representations. However, information about the context is stored mostly in the same element-wise representations. This might make it harder to process properties of the sequence as a whole. Adding trainable memory to selectively store local as well as global representations of a sequence is a promising direction for improving the Transformer model. Memory-augmented neural networks (MANNs) extend traditional neural architectures with general-purpose memory for representations. MANNs have demonstrated the capability to learn simple algorithms like Copy or Reverse and can be successfully trained via backpropagation on diverse tasks from question answering to language modeling, outperforming RNNs and LSTMs of comparable complexity. In this work, we propose and study a few extensions of the Transformer baseline: (1) adding memory tokens to store non-local representations, (2) creating a memory bottleneck for the global information, and (3) controlling memory updates with a dedicated layer. We evaluate these memory-augmented Transformers and demonstrate that the presence of memory correlates positively with model performance on machine translation and language modelling tasks. Augmenting a pre-trained masked language model with memory tokens shows mixed results on tasks from the GLUE benchmark. Visualization of attention patterns over the memory suggests that it improves the model's ability to process a global context.
DOI: 10.48550/arxiv.2006.11527
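
To make extension (1) from the abstract concrete, the sketch below illustrates the memory-token idea in PyTorch: a small set of trainable memory embeddings is prepended to the input sequence, so self-attention can read from and write to them alongside the regular tokens. This is a minimal illustration under assumed hyperparameters; the class name, dimensions, and layout are not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MemoryTokenEncoder(nn.Module):
    """Illustrative sketch: prepend trainable [mem] tokens to the input
    sequence and run a standard Transformer encoder over the result."""

    def __init__(self, vocab_size=1000, d_model=128, n_heads=4,
                 n_layers=2, num_mem_tokens=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # num_mem_tokens trainable vectors serving as general-purpose memory
        self.mem_tokens = nn.Parameter(torch.randn(num_mem_tokens, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                                      # (B, T, D)
        mem = self.mem_tokens.unsqueeze(0).expand(x.size(0), -1, -1)   # (B, M, D)
        h = self.encoder(torch.cat([mem, x], dim=1))                   # (B, M+T, D)
        # Split the output back into memory and sequence representations
        mem_out, seq_out = h[:, :mem.size(1)], h[:, mem.size(1):]
        return mem_out, seq_out

# usage: two sequences of 16 token ids from a hypothetical vocabulary
model = MemoryTokenEncoder()
mem_out, seq_out = model(torch.randint(0, 1000, (2, 16)))
```

Because the memory tokens have no fixed input content, whatever they end up representing is determined entirely by training; the bottleneck and dedicated-update variants described in the abstract further restrict how the regular tokens and the memory may exchange information.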