Does Instruction Tuning Make LLMs More Consistent?
Saved in:

Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The purpose of instruction tuning is to enable zero-shot performance, but instruction tuning has also been shown to improve chain-of-thought reasoning and value alignment (Si et al., 2023). Here we consider the impact on $\textit{consistency}$, i.e., the sensitivity of language models to small perturbations in the input. We compare 10 instruction-tuned LLaMA models to the original LLaMA-7b model and show that, almost across the board, they become more consistent, both in terms of their representations and their predictions in zero-shot and downstream tasks. We explain these improvements through mechanistic analyses of factual recall.
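The abstract defines consistency as insensitivity of predictions to small input perturbations. Below is a minimal, hypothetical sketch of one way such a prediction-consistency score could be computed over sets of paraphrases: the fraction of paraphrase pairs on which a model returns the same label. The `predict` callable, the toy prompts, and the pairwise-agreement metric are illustrative assumptions, not the paper's actual protocol.

```python
def prediction_consistency(predict, paraphrase_sets):
    """Fraction of paraphrase pairs on which `predict` agrees.

    `predict` maps a prompt string to a label; `paraphrase_sets` is a
    list of lists, each inner list holding paraphrases of one query.
    """
    agree, total = 0, 0
    for prompts in paraphrase_sets:
        labels = [predict(p) for p in prompts]
        # Compare every unordered pair of paraphrases of the same query.
        for i in range(len(labels)):
            for j in range(i + 1, len(labels)):
                agree += labels[i] == labels[j]
                total += 1
    return agree / total if total else 0.0


if __name__ == "__main__":
    # Toy example: a stand-in "model" that always answers the same way
    # is perfectly consistent (score 1.0) regardless of phrasing.
    sets = [
        ["What is the capital of France?", "Name France's capital city."],
        ["Is water wet? Yes or no.", "Answer yes or no: is water wet?"],
    ]
    print(prediction_consistency(lambda prompt: "yes", sets))  # -> 1.0
```

A real evaluation would replace the lambda with calls to the instruction-tuned and base LLaMA models and compare their scores; the paper additionally examines consistency of internal representations, which this sketch does not cover.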
DOI: 10.48550/arxiv.2404.15206