Chain of Alignment: Integrating Public Will with Expert Intelligence for Language Model Alignment
Saved in:
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: We introduce a method to measure the alignment between public will and language model (LM) behavior that can be applied to fine-tuning, online oversight, and pre-release safety checks. Our `chain of alignment' (CoA) approach produces a rule-based reward (RBR) by creating model behavior $\textit{rules}$ aligned to normative $\textit{objectives}$ aligned to $\textit{public will}$. This factoring enables a nonexpert public to directly specify their will through the normative objectives, while expert intelligence is used to figure out rules entailing model behavior that best achieves those objectives. We validate our approach by applying it across three different domains of LM prompts related to mental health. We demonstrate a public input process built on collective dialogues and bridging-based ranking that reliably produces normative objectives supported by at least $96\% \pm 2\%$ of the US public. We then show that rules developed by mental health experts to achieve those objectives enable an RBR that evaluates an LM response's alignment with the objectives similarly to human experts (Pearson's $r = 0.841$, $AUC = 0.964$). By measuring alignment with objectives that have near-unanimous public support, these CoA RBRs provide an approximate measure of alignment between LM behavior and public will.
DOI: 10.48550/arxiv.2411.10534
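
To make the factoring described in the abstract concrete, the following is a minimal Python sketch of a chain-of-alignment-style rule-based reward: expert-written behavior rules tied to publicly supported objectives are checked against an LM response and aggregated into a reward, which is then compared with expert ratings via Pearson's r. The rule texts, weights, keyword-based checks, and expert scores are illustrative assumptions only; the paper uses expert-crafted rules and LM-based grading rather than keyword matching.

```python
"""Minimal sketch of a chain-of-alignment rule-based reward (RBR).

The rules, weights, keyword checks, and expert scores below are
illustrative placeholders, not the rules or graders used in the paper.
"""
import numpy as np

# Hypothetical behavior rules, each tied to a normative objective that
# received broad public support (e.g. "acknowledge the user's feelings").
RULES = [
    ("acknowledge_feelings", 1.0,
     lambda r: any(k in r.lower() for k in ("sounds", "hear you", "understand"))),
    ("point_to_professional_help", 1.0,
     lambda r: "therapist" in r.lower() or "professional" in r.lower()),
    ("avoid_diagnosing_the_user", 1.0,
     lambda r: "you have" not in r.lower()),
]


def rbr_score(response: str) -> float:
    """Weighted fraction of rules the response satisfies, in [0, 1]."""
    total = sum(w for _, w, _ in RULES)
    passed = sum(w for _, w, check in RULES if check(response))
    return passed / total


if __name__ == "__main__":
    responses = [
        "That sounds really hard. Talking with a therapist could help.",
        "Talking to a therapist might help.",
        "You have depression; just push through it.",
    ]
    expert_scores = [0.9, 0.6, 0.1]  # hypothetical expert alignment ratings
    rbr_scores = [rbr_score(r) for r in responses]
    print("RBR scores:", rbr_scores)
    # Agreement between the rule-based reward and expert judgments.
    print("Pearson r:", float(np.corrcoef(rbr_scores, expert_scores)[0, 1]))
```

In this toy setup the reward is a simple weighted rule-pass rate; correlating it with expert ratings mirrors, at a much smaller scale, the kind of RBR-versus-expert agreement the abstract reports.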