Coherence boosting: When your pretrained language model is not paying enough attention
| | |
|---|---|
| Main Authors: | , , |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online Access: | Order full text |
| Abstract: | Long-range semantic coherence remains a challenge in automatic language generation and understanding. We demonstrate that large language models have insufficiently learned the effect of distant words on next-token prediction. We present coherence boosting, an inference procedure that increases a LM's focus on a long context. We show the benefits of coherence boosting with pretrained models by distributional analyses of generated ordinary text and dialog responses. It is also found that coherence boosting with state-of-the-art models for various zero-shot NLP tasks yields performance gains with no additional training. |
| DOI: | 10.48550/arxiv.2110.08294 |
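
The abstract describes coherence boosting only at a high level. Below is a minimal sketch of one way such an inference procedure can be realized, assuming a contrastive log-linear combination in which next-token logits conditioned on the full context are played off against logits conditioned on a truncated, recent-only context. The helper `coherence_boosted_logits`, the model choice `gpt2`, and the hyperparameters `alpha` and `k` are illustrative assumptions, not details taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def coherence_boosted_logits(model, input_ids, alpha=0.5, k=10):
    """Sketch of boosted next-token logits for one sequence (shape 1 x T).

    Assumed formulation: contrast logits from the full context with
    logits from only the last `k` tokens, so that the contribution of
    distant words is amplified by the weight `alpha`.
    """
    full_logits = model(input_ids).logits[:, -1, :]    # f(full context)
    short_ids = input_ids[:, -k:]                      # recent tokens only
    short_logits = model(short_ids).logits[:, -1, :]   # f(short context)
    # Log-linear combination: (1 + alpha) * full - alpha * short.
    # If the prompt has at most k tokens, this reduces to the plain logits.
    return (1 + alpha) * full_logits - alpha * short_logits

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Long-range dependencies matter: the name mentioned at the start was"
ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    boosted = coherence_boosted_logits(model, ids)
print(tokenizer.decode(boosted.argmax(dim=-1)))  # greedy boosted next token
```

Greedy decoding is used here only to keep the example short; in practice the boosted logits could feed any standard sampling scheme.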