Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree?
Format: Article
Language: English
Online access: Order full text
Abstract: Reward Models (RMs) are crucial for aligning language models with human preferences. Currently, the evaluation of RMs depends on measuring accuracy against a validation set of manually annotated preference data. Although this method is straightforward and widely adopted, the relationship between RM accuracy and downstream policy performance remains under-explored. In this work, we conduct experiments in a synthetic setting to investigate how differences between RMs, as measured by accuracy, translate into gaps in optimized policy performance. Our findings reveal that while there is a weak positive correlation between accuracy and downstream performance, policies optimized towards RMs with similar accuracy can exhibit quite different performance. Moreover, we discover that the way accuracy is measured significantly impacts its ability to predict final policy performance. Through the lens of the Regressional Goodhart effect, we recognize that accuracy, when used to measure RM quality, can fail to fully capture potential RM overoptimization. This underscores the inadequacy of relying solely on accuracy to reflect an RM's impact on policy optimization.
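
The accuracy metric discussed in the abstract is usually pairwise agreement: the fraction of annotated preference pairs on which the RM scores the chosen response above the rejected one. The sketch below illustrates that computation in general terms; it is not taken from the paper, and the `reward_fn` interface, the data format, and the dummy reward are assumptions made for illustration.

```python
# Minimal sketch (not from the paper): pairwise accuracy of a reward model
# on a preference validation set. `reward_fn` is a hypothetical stand-in for
# whatever scoring interface the RM exposes; the (prompt, chosen, rejected)
# tuple format is an assumption.

def pairwise_accuracy(reward_fn, preference_pairs):
    """Fraction of annotated pairs where the RM scores the chosen
    response above the rejected one."""
    correct = 0
    for prompt, chosen, rejected in preference_pairs:
        if reward_fn(prompt, chosen) > reward_fn(prompt, rejected):
            correct += 1
    return correct / len(preference_pairs)

# Toy usage with a dummy reward that simply prefers longer responses.
if __name__ == "__main__":
    dummy_rm = lambda prompt, response: len(response)
    pairs = [
        ("Explain RLHF.", "A detailed explanation...", "No."),
        ("Summarize the paper.", "Short.", "A faithful multi-sentence summary..."),
    ]
    print(pairwise_accuracy(dummy_rm, pairs))  # 0.5 on this toy set
```

The paper's point is that this single number can be misleading: two RMs that tie on such a metric may still yield very different policies once optimized against, because accuracy does not reveal how the RM behaves under optimization pressure.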
DOI: 10.48550/arxiv.2410.05584