Provable Multi-Party Reinforcement Learning with Diverse Human Feedback

Reinforcement learning with human feedback (RLHF) is an emerging paradigm to align models with human preferences. Typically, RLHF aggregates preferences from multiple individuals who have diverse viewpoints that may conflict with each other. Our work initiates the theoretical study of multi-party RLHF that explicitly models the diverse preferences of multiple individuals. We show how traditional RLHF approaches can fail, since learning a single reward function cannot capture and balance the preferences of multiple individuals. To overcome such limitations, we incorporate meta-learning to learn multiple preferences and adopt different social welfare functions to aggregate the preferences across multiple parties. We focus on the offline learning setting and establish sample complexity bounds, along with efficiency and fairness guarantees, for optimizing diverse social welfare functions such as Nash, Utilitarian, and Leximin welfare functions. Our results show a separation between the sample complexities of multi-party RLHF and traditional single-party RLHF. Furthermore, we consider a reward-free setting, where each individual's preference is no longer consistent with a reward model, and give pessimistic variants of the von Neumann Winner based on offline preference data. Taken together, our work showcases the advantage of multi-party RLHF but also highlights its more demanding statistical complexity.
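
To make the aggregation step concrete, the following is a minimal, hypothetical sketch (not code from the paper; the function names and toy utility values are assumptions for illustration). It shows how the Utilitarian (sum), Nash (product), and Leximin (sorted, lexicographic) welfare functions named in the abstract can rank candidate policies once each party's utility under a policy has been estimated.

```python
# Illustrative sketch only: ranking candidate policies under the three social
# welfare functions mentioned in the abstract. Toy data, not the paper's method.
import numpy as np


def utilitarian_welfare(utilities: np.ndarray) -> float:
    """Utilitarian welfare: the sum of individual utilities."""
    return float(np.sum(utilities))


def nash_welfare(utilities: np.ndarray) -> float:
    """Nash welfare: the product of individual utilities (0 if any is nonpositive)."""
    if np.any(utilities <= 0):
        return 0.0
    return float(np.exp(np.sum(np.log(utilities))))  # log-sum form for numerical stability


def leximin_key(utilities: np.ndarray) -> tuple:
    """Leximin compares utility profiles by their ascending sorted order."""
    return tuple(np.sort(utilities))


# Hypothetical estimated utilities of two candidate policies for three parties.
policy_utils = {
    "policy_A": np.array([0.9, 0.2, 0.8]),
    "policy_B": np.array([0.5, 0.5, 0.6]),
}

best_utilitarian = max(policy_utils, key=lambda p: utilitarian_welfare(policy_utils[p]))
best_nash = max(policy_utils, key=lambda p: nash_welfare(policy_utils[p]))
best_leximin = max(policy_utils, key=lambda p: leximin_key(policy_utils[p]))

print("Utilitarian:", best_utilitarian)  # policy_A (higher total utility)
print("Nash:", best_nash)                # policy_B (more balanced profile)
print("Leximin:", best_leximin)          # policy_B (better worst-off party)
```

On these toy numbers the criteria disagree: the Utilitarian objective favors policy_A, while Nash and Leximin favor policy_B. This illustrates why the choice of welfare function matters when individual preferences conflict, which is the situation the multi-party setting is designed to handle.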

Bibliographic Details
Main Authors: Zhong, Huiying; Deng, Zhun; Su, Weijie J; Wu, Zhiwei Steven; Zhang, Linjun
Format: Article
Language: English
Date: 2024-03-07
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning; Statistics - Methodology
DOI: 10.48550/arxiv.2403.05006
Source: arXiv.org
Online Access: https://arxiv.org/abs/2403.05006