LRHP: Learning Representations for Human Preferences via Preference Pairs

Bibliographic Details
Main Authors: Wang, Chenglong; Gan, Yang; Huo, Yifu; Mu, Yongyu; He, Qiaozhi; Yang, Murun; Xiao, Tong; Zhang, Chunliang; Liu, Tongran; Zhu, Jingbo
Format: Article
Language: English
description To improve human-preference alignment training, current research has developed numerous preference datasets consisting of preference pairs labeled as "preferred" or "dispreferred". These preference pairs are typically used to encode human preferences into a single numerical value through reward modeling, which acts as a reward signal during reinforcement learning from human feedback (RLHF). However, representing human preferences as a single numerical value complicates the analysis of those preferences and restricts their broader application beyond RLHF. In contrast, in this work we introduce a preference representation learning task that aims to construct a richer and more structured representation of human preferences. We further develop a more generalizable framework, Learning Representations for Human Preferences via Preference Pairs (LRHP), which extends beyond traditional reward modeling to tackle this task. We verify the utility of preference representations in two downstream tasks: preference data selection and preference margin prediction. Building on these preference representations, we achieve strong performance in both tasks, significantly outperforming baselines. (A minimal illustrative code sketch of this contrast appears after the record fields below.)
date 2024-10-06
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2410.04503
recordid cdi_arxiv_primary_2410_04503
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
url https://arxiv.org/abs/2410.04503
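
The description above contrasts standard reward modeling, which compresses each preference pair into a single scalar reward signal for RLHF, with learning a richer preference representation that can feed downstream tasks such as preference data selection and preference margin prediction. The PyTorch sketch below is only a minimal illustration of that contrast; it is not the LRHP authors' implementation, and all names and shapes (PairwiseRewardModel, PreferencePairEncoder, bradley_terry_loss, hidden_dim, repr_dim) are hypothetical assumptions.

```python
# Minimal sketch (PyTorch) contrasting scalar reward modeling with a
# preference-representation encoder. Names and shapes are illustrative
# assumptions, not the LRHP authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairwiseRewardModel(nn.Module):
    """Standard reward model: compresses a response into one scalar reward."""

    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        # Stand-in for a pretrained LM encoder producing pooled features.
        self.backbone = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim)
        )
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # (batch, hidden_dim) -> (batch,)
        return self.value_head(self.backbone(features)).squeeze(-1)


def bradley_terry_loss(r_preferred: torch.Tensor, r_dispreferred: torch.Tensor) -> torch.Tensor:
    """Pairwise loss commonly used in reward modeling: preferred should score higher."""
    return -F.logsigmoid(r_preferred - r_dispreferred).mean()


class PreferencePairEncoder(nn.Module):
    """Hypothetical encoder in the spirit of preference-representation learning:
    maps a (preferred, dispreferred) pair to a vector instead of a single scalar."""

    def __init__(self, hidden_dim: int = 768, repr_dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, repr_dim)
        )
        # Example downstream head: preference margin prediction.
        self.margin_head = nn.Linear(repr_dim, 1)

    def forward(self, preferred: torch.Tensor, dispreferred: torch.Tensor):
        z = self.proj(torch.cat([preferred, dispreferred], dim=-1))  # (batch, repr_dim)
        margin = self.margin_head(z).squeeze(-1)                     # (batch,)
        return z, margin


if __name__ == "__main__":
    batch, hidden = 4, 768
    preferred = torch.randn(batch, hidden)      # placeholder pooled features
    dispreferred = torch.randn(batch, hidden)

    rm = PairwiseRewardModel(hidden)
    loss = bradley_terry_loss(rm(preferred), rm(dispreferred))

    enc = PreferencePairEncoder(hidden)
    z, margin = enc(preferred, dispreferred)
    print(loss.item(), z.shape, margin.shape)
```

In this sketch the reward model's scalar output discards structure, whereas the encoder's vector z could be reused for downstream tasks, e.g., scored by a margin head or clustered for preference data selection.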