LRHP: Learning Representations for Human Preferences via Preference Pairs
Main authors: | , , , , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | To improve human-preference alignment training, current research has developed numerous preference datasets consisting of preference pairs labeled as "preferred" or "dispreferred". Through reward modeling, these preference pairs are typically encoded into a single numerical value that serves as the reward signal during reinforcement learning from human feedback (RLHF). However, compressing human preferences into a single number complicates their analysis and restricts their applicability beyond RLHF. In this work, we instead introduce a preference representation learning task that aims to construct a richer and more structured representation of human preferences. We further develop a generalizable framework, Learning Representations for Human Preferences via preference pairs (LRHP), which extends beyond traditional reward modeling to tackle this task. We verify the utility of these preference representations on two downstream tasks: preference data selection and preference margin prediction. Building on the learned preference representations, we achieve strong performance on both tasks, significantly outperforming the baselines. |
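As background for the scalar reward modeling that the abstract contrasts with LRHP, the sketch below shows the standard Bradley-Terry pairwise loss commonly used to encode each (preferred, dispreferred) pair into a single reward value per response. This is a minimal illustration, not code from the paper; the model class, hidden size, and stand-in embeddings are assumptions chosen for the example.

```python
# Illustrative sketch (not from the LRHP paper): standard Bradley-Terry pairwise
# reward modeling, which collapses each preference pair into scalar rewards.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScalarRewardModel(nn.Module):
    """Hypothetical reward head: maps a pooled response embedding to one scalar."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, pooled_embedding: torch.Tensor) -> torch.Tensor:
        # pooled_embedding: (batch, hidden_size) -> (batch,) scalar rewards
        return self.head(pooled_embedding).squeeze(-1)


def bradley_terry_loss(reward_preferred: torch.Tensor,
                       reward_dispreferred: torch.Tensor) -> torch.Tensor:
    # -log sigmoid(r_preferred - r_dispreferred), averaged over the batch
    return -F.logsigmoid(reward_preferred - reward_dispreferred).mean()


if __name__ == "__main__":
    model = ScalarRewardModel(hidden_size=768)
    # Stand-in embeddings for a batch of 4 (preferred, dispreferred) pairs;
    # in practice these would come from a language-model encoder.
    emb_preferred = torch.randn(4, 768)
    emb_dispreferred = torch.randn(4, 768)
    loss = bradley_terry_loss(model(emb_preferred), model(emb_dispreferred))
    loss.backward()
    print(f"pairwise reward-modeling loss: {loss.item():.4f}")
```

LRHP, as described in the abstract, replaces this single scalar with a richer learned representation of each preference pair, which can then feed downstream tasks such as preference data selection and preference margin prediction.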
DOI: | 10.48550/arxiv.2410.04503 |