Exploring Length Generalization in Large Language Models
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: The ability to extrapolate from short problem instances to longer ones is an important form of out-of-distribution generalization in reasoning tasks, and is crucial when learning from datasets where longer problem instances are rare. These include theorem proving, solving quantitative mathematics problems, and reading/summarizing novels. In this paper, we run careful empirical studies exploring the length generalization capabilities of transformer-based language models. We first establish that naively finetuning transformers on length generalization tasks shows significant generalization deficiencies independent of model scale. We then show that combining pretrained large language models' in-context learning abilities with scratchpad prompting (asking the model to output solution steps before producing an answer) results in a dramatic improvement in length generalization. We run careful failure analyses on each of the learning modalities and identify common sources of mistakes that highlight opportunities in equipping language models with the ability to generalize to longer problems.
DOI: 10.48550/arxiv.2207.04901
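The summary defines scratchpad prompting only in passing. As a concrete illustration, the following minimal Python sketch builds a scratchpad-style few-shot prompt for a length-generalization task; the parity task, the prompt format, and the helper names are illustrative assumptions, not the paper's actual prompts.

```python
# Minimal sketch of scratchpad-style few-shot prompting (illustrative;
# not the paper's exact prompt format). The model is asked to emit
# intermediate solution steps before the final answer, rather than
# producing the answer alone.

def scratchpad_example(bits):
    """Render one few-shot exemplar: running parity shown step by step."""
    lines = [f"Q: What is the parity of {' '.join(map(str, bits))}?"]
    parity = 0
    for i, b in enumerate(bits):
        parity ^= b
        lines.append(f"Step {i + 1}: saw {b}, running parity is {parity}.")
    lines.append(f"A: {parity}")
    return "\n".join(lines)

def build_prompt(train_instances, query_bits):
    """Concatenate short exemplars, then pose a longer unanswered query."""
    shots = "\n\n".join(scratchpad_example(bits) for bits in train_instances)
    query = f"Q: What is the parity of {' '.join(map(str, query_bits))}?"
    return f"{shots}\n\n{query}\n"

# The exemplars are short; the query is deliberately longer, so completing
# it correctly probes length generalization.
prompt = build_prompt([[1, 0, 1], [0, 1, 1, 0]], [1, 1, 0, 1, 0, 1, 1])
print(prompt)
```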