Towards Comprehensive Preference Data Collection for Reward Modeling
Saved in:
Published in: | arXiv.org 2024-06 |
---|---|
Main authors: | Hu, Yulan; Li, Qingyang; Ouyang, Sheng; Chen, Ge; Chen, Kaihui; Mei, Lijun; Ye, Xucheng; Zhang, Fuzheng; Liu, Yong |
Format: | Article |
Language: | eng |
Subjects: | Critical components; Data collection; Large language models; Machine learning; Preferences |
Online Access: | Full text |
container_title | arXiv.org |
---|---|
creator | Hu, Yulan Li, Qingyang Ouyang, Sheng Chen, Ge Chen, Kaihui Mei, Lijun Ye, Xucheng Zhang, Fuzheng Liu, Yong |
description | Reinforcement Learning from Human Feedback (RLHF) facilitates the alignment of large language models (LLMs) with human preferences, thereby enhancing the quality of responses generated. A critical component of RLHF is the reward model, which is trained on preference data and outputs a scalar reward during the inference stage. However, the collection of preference data still lacks thorough investigation. Recent studies indicate that preference data is collected either by AI or humans, where chosen and rejected instances are identified among pairwise responses. We question whether this process effectively filters out noise and ensures sufficient diversity in collected data. To address these concerns, for the first time, we propose a comprehensive framework for preference data collection, decomposing the process into four incremental steps: Prompt Generation, Response Generation, Response Filtering, and Human Labeling. This structured approach ensures the collection of high-quality preferences while reducing reliance on human labor. We conducted comprehensive experiments based on the data collected at different stages, demonstrating the effectiveness of the proposed data collection method. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-06 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3072059981 |
source | Free E-Journals |
subjects | Critical components; Data collection; Large language models; Machine learning; Preferences |
title | Towards Comprehensive Preference Data Collection for Reward Modeling |
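The abstract above describes a reward model that is trained on pairwise preference data (a chosen and a rejected response per prompt) and outputs a scalar reward at inference time. As a point of reference only, the sketch below shows the standard pairwise (Bradley-Terry-style) reward-model objective commonly used for this setup; the loss choice, function names, and toy values are illustrative assumptions, not details taken from the paper or this record.

```python
# Minimal sketch (assumption, not the paper's stated method): training a scalar
# reward model on chosen/rejected response pairs with the standard pairwise loss.
import torch
import torch.nn.functional as F


def reward_model_loss(reward_chosen: torch.Tensor,
                      reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise loss: push the scalar reward of the chosen response above
    the reward of the rejected response for the same prompt."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()


# Toy usage with made-up scalar rewards for a batch of three preference pairs.
chosen = torch.tensor([1.2, 0.3, 0.8], requires_grad=True)
rejected = torch.tensor([0.4, 0.5, -0.1], requires_grad=True)
loss = reward_model_loss(chosen, rejected)
loss.backward()
print(float(loss))  # smaller when chosen rewards exceed rejected rewards
```

In practice the two scalar rewards would come from the same model scoring the chosen and rejected responses to one prompt, which is where the quality of the collected preference pairs (the focus of the paper's four-step pipeline) directly affects training.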