Contrastive Preference Learning: Learning from Human Feedback without RL

Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular paradigm for aligning models with human intent. Typically RLHF algorithms operate in two phases: first, use human preferences to learn a reward function and second, align the model by optimizing the learned reward via reinforcement learning (RL). This paradigm assumes that human preferences are distributed according to reward, but recent work suggests that they instead follow the regret under the user's optimal policy. Thus, learning a reward function from feedback is not only based on a flawed assumption of human preference, but also leads to unwieldy optimization challenges that stem from policy gradients or bootstrapping in the RL phase. Because of these optimization challenges, contemporary RLHF methods restrict themselves to contextual bandit settings (e.g., as in large language models) or limit observation dimensionality (e.g., state-based robotics). We overcome these limitations by introducing a new family of algorithms for optimizing behavior from human feedback using the regret-based model of human preferences. Using the principle of maximum entropy, we derive Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions, circumventing the need for RL. CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs. This enables CPL to elegantly scale to high-dimensional and sequential RLHF problems while being simpler than prior methods.

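To make the description above more concrete, the following is a minimal sketch (not the authors' reference implementation) of what a contrastive preference objective of this kind can look like. It assumes each segment is scored by the policy's discounted sum of log-probabilities, standing in for regret under a maximum-entropy model, and that the preference between two segments follows a Bradley-Terry-style comparison of their scores; the function name, the alpha (temperature) and gamma (discount) parameters, and the array shapes are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def contrastive_preference_loss(logp_chosen, logp_rejected, alpha=0.1, gamma=0.99):
    """Sketch of a contrastive objective over preference pairs of segments.

    logp_chosen / logp_rejected: arrays of shape (batch, horizon) holding the
    policy's per-step log pi(a_t | s_t) for the preferred and non-preferred
    segments. The discounted sum of log-probabilities serves as each segment's
    score, standing in for (negative) regret under the learned policy.
    """
    discounts = gamma ** np.arange(logp_chosen.shape[1])            # (horizon,)
    score_chosen = alpha * (logp_chosen * discounts).sum(axis=1)    # (batch,)
    score_rejected = alpha * (logp_rejected * discounts).sum(axis=1)
    # Bradley-Terry-style term: -log sigmoid(score_chosen - score_rejected),
    # computed stably as log(1 + exp(-diff)).
    diff = score_chosen - score_rejected
    return np.logaddexp(0.0, -diff).mean()

# Toy usage: random log-probabilities for 4 preference pairs over 10-step segments.
rng = np.random.default_rng(0)
logp_a = rng.normal(-1.0, 0.3, size=(4, 10))
logp_b = rng.normal(-1.5, 0.3, size=(4, 10))
print(contrastive_preference_loss(logp_a, logp_b))
```

Minimizing such a loss pushes the policy to assign higher discounted log-likelihood to preferred segments than to rejected ones using only logged comparison data, which is the sense in which the abstract describes CPL as a simple, fully off-policy contrastive objective that sidesteps RL.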

Bibliographic Details
Published in: arXiv.org, 2024-04
Main authors: Hejna, Joey; Rafailov, Rafael; Sikchi, Harshit; Finn, Chelsea; Niekum, Scott; Knox, W. Bradley; Sadigh, Dorsa
Format: Article
Language: English
Online access: Full text
EISSN: 2331-8422
Source: Free E-Journals
Subjects: Algorithms; Feedback; Large language models; Machine learning; Maximum entropy; Optimization; Preferences; Robotics
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T13%3A07%3A14IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Contrastive%20Preference%20Learning:%20Learning%20from%20Human%20Feedback%20without%20RL&rft.jtitle=arXiv.org&rft.au=Hejna,%20Joey&rft.date=2024-04-30&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2880584527%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2880584527&rft_id=info:pmid/&rfr_iscdi=true