On Retrieval Augmentation and the Limitations of Language Model Training
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Augmenting a language model (LM) with $k$-nearest neighbors ($k$NN) retrieval
on its training data alone can decrease its perplexity, though the underlying
reasons for this remain elusive. In this work, we rule out one previously
posited possibility -- the "softmax bottleneck." We then create a new dataset
to evaluate LM generalization ability in the setting where training data
contains additional information that is not causally relevant. This task is
challenging even for GPT-3.5 Turbo. We show that, for both GPT-2 and Mistral
7B, $k$NN retrieval augmentation consistently improves performance in this
setting. Finally, to make $k$NN retrieval more accessible, we propose using a
multi-layer perceptron model that maps datastore keys to values as a drop-in
replacement for traditional retrieval. This reduces storage costs by over 25x.
DOI: 10.48550/arxiv.2311.09615
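
For readers unfamiliar with the technique the abstract builds on, $k$NN retrieval augmentation in the style of $k$NN-LM interpolates the LM's next-token distribution with a distribution induced by nearest-neighbor search over a datastore of (hidden state, next token) pairs built from the training data. The sketch below illustrates that interpolation in NumPy; the function name, the fixed interpolation weight `lmbda`, and the brute-force search are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def knn_lm_next_token_probs(p_lm, query_hidden, keys, values, vocab_size,
                            k=16, lmbda=0.25):
    """Interpolate an LM's next-token distribution with a kNN distribution.

    p_lm         : (vocab_size,) LM softmax over the vocabulary for the current context
    query_hidden : (d,) hidden state of the current context (the retrieval query)
    keys         : (N, d) datastore keys -- hidden states of training contexts
    values       : (N,) int datastore values -- the token that followed each context
    lmbda        : interpolation weight (illustrative choice, not from the paper)
    """
    # Squared L2 distance from the query to every datastore key (brute-force search).
    dists = np.sum((keys - query_hidden) ** 2, axis=1)
    nn = np.argsort(dists)[:k]  # indices of the k nearest neighbors

    # Softmax over negative distances assigns each neighbor a weight.
    logits = -dists[nn]
    w = np.exp(logits - logits.max())
    w /= w.sum()

    # Scatter the neighbor weights onto their stored next tokens.
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], w)

    # Standard kNN-LM interpolation of the two distributions.
    return lmbda * p_knn + (1.0 - lmbda) * p_lm
```

The replacement proposed in the abstract keeps this interface but swaps the explicit datastore and nearest-neighbor search for a multi-layer perceptron trained to map keys to values, which is where the reported >25x storage reduction comes from.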