The Capacity for Moral Self-Correction in Large Language Models
We test the hypothesis that language models trained with reinforcement learning from human feedback (RLHF) have the capability to "morally self-correct" -- to avoid producing harmful outputs -- if instructed to do so. We find strong evidence in support of this hypothesis across three diffe...
Gespeichert in:
Hauptverfasser: | , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , |
---|---|
Format: | Artikel |
Sprache: | eng |
Schlagworte: | |
Online-Zugang: | Volltext bestellen |
Tags: |
Tag hinzufügen
Keine Tags, Fügen Sie den ersten Tag hinzu!
|
Zusammenfassung: | We test the hypothesis that language models trained with reinforcement
learning from human feedback (RLHF) have the capability to "morally
self-correct" -- to avoid producing harmful outputs -- if instructed to do so.
We find strong evidence in support of this hypothesis across three different
experiments, each of which reveal different facets of moral self-correction. We
find that the capability for moral self-correction emerges at 22B model
parameters, and typically improves with increasing model size and RLHF
training. We believe that at this level of scale, language models obtain two
capabilities that they can use for moral self-correction: (1) they can follow
instructions and (2) they can learn complex normative concepts of harm like
stereotyping, bias, and discrimination. As such, they can follow instructions
to avoid certain kinds of morally harmful outputs. We believe our results are
cause for cautious optimism regarding the ability to train language models to
abide by ethical principles. |
---|---|
DOI: | 10.48550/arxiv.2302.07459 |