Focused Transformer: Contrastive Training for Context Scaling
Main authors: | , , , , , |
---|---|
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: Large language models have an exceptional capability to incorporate new information in a contextual manner. However, the full potential of such an approach is often restrained by a limitation in the effective context length. One solution to this issue is to endow an attention layer with access to an external memory, which consists of (key, value) pairs. Yet, as the number of documents increases, the proportion of relevant keys to irrelevant ones decreases, leading the model to focus more on the irrelevant keys. We identify a significant challenge, dubbed the distraction issue, where keys linked to different semantic values might overlap, making them hard to distinguish. To tackle this problem, we introduce the Focused Transformer (FoT), a technique that employs a training process inspired by contrastive learning. This novel approach enhances the structure of the (key, value) space, enabling an extension of the context length. Our method allows for fine-tuning pre-existing, large-scale models to lengthen their effective context, as demonstrated by our fine-tuning of 3B and 7B OpenLLaMA checkpoints. The resulting models, which we name LongLLaMA, exhibit advancements in tasks requiring a long context. We further illustrate that our LongLLaMA models adeptly manage a 256k context length for passkey retrieval.
DOI: | 10.48550/arxiv.2307.03170 |
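The abstract above describes attention with access to an external memory of (key, value) pairs. As an illustration only, not the authors' released code, the sketch below shows in PyTorch how a single query might attend jointly over its local context and the top-k keys retrieved from such a memory; the function name, shapes, and the top_k parameter are assumptions made for this example.

```python
# Hypothetical sketch of memory-augmented attention in the spirit of the paper:
# a query attends over local (key, value) pairs plus the most similar entries
# retrieved from an external memory. Not the FoT training procedure itself.
import torch
import torch.nn.functional as F

def memory_attention(q, local_k, local_v, mem_k, mem_v, top_k=32):
    """q: (d,), local_k/local_v: (L, d), mem_k/mem_v: (M, d). Returns (d,)."""
    # kNN lookup: pick the top_k memory keys most similar to the query.
    sims = mem_k @ q
    idx = sims.topk(min(top_k, mem_k.shape[0])).indices
    # Attend over the union of local keys and retrieved memory keys.
    k = torch.cat([local_k, mem_k[idx]], dim=0)
    v = torch.cat([local_v, mem_v[idx]], dim=0)
    attn = F.softmax((k @ q) / k.shape[-1] ** 0.5, dim=0)
    return attn @ v

d, L, M = 64, 16, 1024
out = memory_attention(torch.randn(d),
                       torch.randn(L, d), torch.randn(L, d),
                       torch.randn(M, d), torch.randn(M, d))
print(out.shape)  # torch.Size([64])
```

Note that the paper's stated contribution, the contrastive-learning-inspired training that shapes the (key, value) space to mitigate the distraction issue, is not shown here; this sketch only illustrates the external-memory lookup that such training is meant to make reliable at large memory sizes.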