RLCD: Reinforcement Learning from Contrastive Distillation for Language Model Alignment
Format: Article
Language: English
Abstract: We propose Reinforcement Learning from Contrastive Distillation (RLCD), a method for aligning language models to follow principles expressed in natural language (e.g., to be more harmless) without using human feedback. RLCD creates preference pairs from two contrasting model outputs, one using a positive prompt designed to encourage following the given principles, and one using a negative prompt designed to encourage violating them. Using two different prompts causes model outputs to be more differentiated on average, resulting in cleaner preference labels in the absence of human annotations. We then use the preference pairs to train a preference model, which is in turn used to improve a base unaligned language model via reinforcement learning. Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context distillation (Huang et al., 2022) baselines across three diverse alignment tasks--harmlessness, helpfulness, and story outline generation--and when using both 7B and 30B model scales for simulating preference data.
DOI: 10.48550/arxiv.2307.12950
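
The abstract describes RLCD's data-generation step at a high level. The snippet below is a minimal, non-authoritative Python sketch of that step, not the authors' implementation: the prompt prefixes, the `generate` stub, and all identifiers are hypothetical placeholders standing in for an unaligned base model and the paper's actual prompt templates.

```python
from typing import Callable, Dict, List

# Hypothetical positive/negative "principle" prefixes (placeholders, not the
# paper's exact prompts). The positive prefix encourages following the
# principle; the negative prefix encourages violating it.
POSITIVE_PREFIX = "(Respond as helpfully and harmlessly as possible.)\n"
NEGATIVE_PREFIX = "(Respond in an unhelpful, harmful way.)\n"


def make_preference_pair(prompt: str, generate: Callable[[str], str]) -> Dict[str, str]:
    """Simulate one RLCD-style preference pair for a single user prompt.

    The output produced under the positive prompt is labeled "chosen" and the
    output produced under the negative prompt is labeled "rejected"; pairs
    like this are then used to train a preference model.
    """
    chosen = generate(POSITIVE_PREFIX + prompt)
    rejected = generate(NEGATIVE_PREFIX + prompt)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}


if __name__ == "__main__":
    # Stub generator for demonstration only; in practice this would sample
    # from an unaligned base LLM (e.g., at 7B or 30B scale).
    def toy_generate(full_prompt: str) -> str:
        return f"[model continuation of: {full_prompt!r}]"

    pairs: List[Dict[str, str]] = [
        make_preference_pair("How do I ask my neighbor to turn down the music?", toy_generate)
    ]
    print(pairs[0]["chosen"])
    print(pairs[0]["rejected"])
```

Because the chosen/rejected labels follow directly from which prompt produced each output, no separate scoring or annotation step is needed when simulating the preference data.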