Smaug: Fixing Failure Modes of Preference Optimisation with DPO-Positive
| Main authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Abstract:

Direct Preference Optimisation (DPO) is effective at significantly improving the performance of large language models (LLMs) on downstream tasks such as reasoning, summarisation, and alignment. Using pairs of preferred and dispreferred data, DPO models the relative probability of picking one response over another. In this work, we first show theoretically that the standard DPO loss can lead to a reduction of the model's likelihood of the preferred examples, as long as the relative probability between the preferred and dispreferred classes increases. We then show empirically that this phenomenon occurs when fine-tuning LLMs on common datasets, especially datasets in which the edit distance between pairs of completions is low. Using these insights, we design DPO-Positive (DPOP), a new loss function and training procedure which avoids this failure mode. Surprisingly, we find that DPOP outperforms DPO and other fine-tuning procedures across a wide variety of datasets and downstream tasks, including datasets with high edit distances between completions. Furthermore, we find that the DPOP-tuned model outperforms the DPO-tuned model (all else equal) on benchmarks independent of the fine-tuning data, such as MT-Bench. Finally, using DPOP, we create and open-source Smaug-34B and Smaug-72B, with the latter becoming the first open-source LLM to surpass an average accuracy of 80% on the HuggingFace Open LLM Leaderboard.
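The abstract describes DPOP as the standard DPO objective with an added term that penalises any drop in the preferred completion's likelihood below its value under the reference model. As a rough illustration only (not the paper's reference implementation), a minimal PyTorch sketch of both losses might look like the following; the hyperparameter values and function names are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logps_w, policy_logps_l, ref_logps_w, ref_logps_l, beta=0.1):
    """Standard DPO loss, given summed log-probs of the preferred (w) and
    dispreferred (l) completions under the policy and the frozen reference model."""
    # Log-ratios of policy vs. reference for each completion
    ratio_w = policy_logps_w - ref_logps_w
    ratio_l = policy_logps_l - ref_logps_l
    # Maximise the margin between preferred and dispreferred log-ratios;
    # note this margin can grow even while policy_logps_w decreases.
    return -F.logsigmoid(beta * (ratio_w - ratio_l)).mean()

def dpop_loss(policy_logps_w, policy_logps_l, ref_logps_w, ref_logps_l,
              beta=0.1, lam=50.0):
    """DPO-Positive sketch: DPO plus a penalty that activates whenever the
    policy's likelihood of the preferred completion falls below the reference's.
    beta and lam are assumed hyperparameter values, not the paper's settings."""
    ratio_w = policy_logps_w - ref_logps_w
    ratio_l = policy_logps_l - ref_logps_l
    # Zero while policy_logps_w >= ref_logps_w, positive otherwise
    penalty = torch.clamp(ref_logps_w - policy_logps_w, min=0.0)
    return -F.logsigmoid(beta * (ratio_w - ratio_l - lam * penalty)).mean()
```

In this sketch, the penalty term blocks the optimiser from satisfying the DPO margin by lowering the likelihood of both completions at once, which is the failure mode the abstract attributes to datasets with low edit distance between paired completions.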
DOI: 10.48550/arxiv.2402.13228