Diverging Preferences: When do Annotators Disagree and do Models Know?
Saved in:
Main authors: | |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Summary: | We examine diverging preferences in human-labeled preference datasets. We
develop a taxonomy of disagreement sources spanning 10 categories across four
high-level classes -- task underspecification, response style, refusals, and
annotation errors. We find that the majority of disagreements are in opposition
to standard reward modeling approaches, which are designed with the
assumption that annotator disagreement is noise. We then explore how these
findings impact two areas of LLM development: reward modeling and evaluation.
In our experiments, we demonstrate how standard reward modeling methods, like
the Bradley-Terry model, fail to differentiate whether a given preference
judgment is the result of unanimous agreement among annotators or the majority
opinion among diverging user preferences. We also find that these tendencies
are echoed by popular LLM-as-Judge evaluation methods, which consistently
identify a winning response in cases of diverging preferences. These findings
highlight remaining challenges in LLM evaluations, which are greatly influenced
by divisive features like response style, and in developing pluralistically
aligned LLMs. To address these issues, we develop methods for identifying
diverging preferences to mitigate their influence on evaluation and training. |
DOI: | 10.48550/arxiv.2410.14632 |
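As a minimal, hypothetical sketch (not code from the paper; the function and variable names are assumptions), the following Python shows the standard Bradley-Terry reward-modeling objective referenced in the summary. Because each training pair contributes only a single binary chosen-vs.-rejected label, the loss is the same whether annotators agreed unanimously or split narrowly, which is why a reward model trained this way cannot distinguish the two cases.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Standard Bradley-Terry reward-model objective: maximize log sigma(r_chosen - r_rejected)
    over preference pairs. The target is one binary label per pair, so a pair where annotators
    split 3-2 is fit exactly like a pair with unanimous 5-0 agreement."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy usage with hypothetical scalar reward scores for two responses to the same prompt.
r_a = torch.tensor([0.8], requires_grad=True)
r_b = torch.tensor([0.1], requires_grad=True)
loss = bradley_terry_loss(r_a, r_b)
loss.backward()  # gradients push r_a up and r_b down, regardless of how divided annotators were
```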