WARM: On the Benefits of Weight Averaged Reward Models
Aligning large language models (LLMs) with human preferences through reinforcement learning from human feedback (RLHF) can lead to reward hacking, where LLMs exploit failures in the reward model (RM) to achieve seemingly high rewards without meeting the underlying objectives. We identify two primary challenges when designing RMs to mitigate reward hacking: distribution shifts during the RL process and inconsistencies in human preferences. As a solution, we propose Weight Averaged Reward Models (WARM), first fine-tuning multiple RMs, then averaging them in the weight space. This strategy follows the observation that fine-tuned weights remain linearly mode connected when sharing the same pre-training. By averaging weights, WARM improves efficiency compared to the traditional ensembling of predictions, while improving reliability under distribution shifts and robustness to preference inconsistencies. Our experiments on summarization tasks, using best-of-N and RL methods, show that WARM improves the overall quality and alignment of LLM predictions; for example, a policy RL fine-tuned with WARM has a 79.4% win rate against a policy RL fine-tuned with a single RM.
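The core WARM recipe described above — fine-tune several reward models from the same pre-trained checkpoint, then average their weights parameter-wise — amounts to a few lines of tensor arithmetic. The sketch below is a minimal illustration under that assumption, not the authors' implementation; `RewardModel` and `fine_tuned_rms` are hypothetical placeholders.

```python
# Minimal sketch of WARM-style weight averaging, assuming PyTorch reward models
# that share the same architecture and pre-trained initialization (the
# linear-mode-connectivity condition the abstract relies on).
from typing import Dict, List

import torch


def average_state_dicts(state_dicts: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """Return the uniform, parameter-wise average of M state dicts with identical keys."""
    return {
        name: torch.stack([sd[name].float() for sd in state_dicts]).mean(dim=0)
        for name in state_dicts[0]
    }


# Hypothetical usage: merge M independently fine-tuned reward models into one
# WARM reward model; inference then costs a single forward pass, unlike a
# prediction ensemble that must run all M models.
# warm_rm = RewardModel()
# warm_rm.load_state_dict(average_state_dicts([rm.state_dict() for rm in fine_tuned_rms]))
```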
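The best-of-N evaluation mentioned in the abstract is equally compact: sample N candidate generations from the policy, score each with the (weight-averaged) reward model, and keep the top-scoring one. In the sketch below, `policy_generate` and `reward_score` are hypothetical stand-ins for an LLM sampling function and a WARM reward model, not APIs from the paper.

```python
# Sketch of best-of-N (BoN) selection against a reward model; the two callables
# are assumed placeholders for a policy LLM and an RM scoring function.
from typing import Callable, List


def best_of_n(
    prompt: str,
    policy_generate: Callable[[str, int], List[str]],  # returns n sampled completions
    reward_score: Callable[[str, str], float],         # scores a (prompt, completion) pair
    n: int = 16,
) -> str:
    """Sample n completions and keep the one the reward model ranks highest."""
    candidates = policy_generate(prompt, n)
    return max(candidates, key=lambda c: reward_score(prompt, c))
```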
Saved in:

| Main authors: | Ramé, Alexandre; Vieillard, Nino; Hussenot, Léonard; Dadashi, Robert; Cideron, Geoffrey; Bachem, Olivier; Ferret, Johan |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning |
| Online access: | Order full text |
creator | Ramé, Alexandre; Vieillard, Nino; Hussenot, Léonard; Dadashi, Robert; Cideron, Geoffrey; Bachem, Olivier; Ferret, Johan |
description | Aligning large language models (LLMs) with human preferences through reinforcement learning from human feedback (RLHF) can lead to reward hacking, where LLMs exploit failures in the reward model (RM) to achieve seemingly high rewards without meeting the underlying objectives. We identify two primary challenges when designing RMs to mitigate reward hacking: distribution shifts during the RL process and inconsistencies in human preferences. As a solution, we propose Weight Averaged Reward Models (WARM), first fine-tuning multiple RMs, then averaging them in the weight space. This strategy follows the observation that fine-tuned weights remain linearly mode connected when sharing the same pre-training. By averaging weights, WARM improves efficiency compared to the traditional ensembling of predictions, while improving reliability under distribution shifts and robustness to preference inconsistencies. Our experiments on summarization tasks, using best-of-N and RL methods, show that WARM improves the overall quality and alignment of LLM predictions; for example, a policy RL fine-tuned with WARM has a 79.4% win rate against a policy RL fine-tuned with a single RM. |
doi_str_mv | 10.48550/arxiv.2401.12187 |
format | Article |
identifier | DOI: 10.48550/arxiv.2401.12187 |
language | eng |
recordid | cdi_arxiv_primary_2401_12187 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning |
title | WARM: On the Benefits of Weight Averaged Reward Models |