Adaptive Dense Reward: Understanding the Gap Between Action and Reward Space in Alignment
Published in: | arXiv.org 2024-12 |
---|---|
Main authors: | Li, Yanshi; Xiong, Shaopan; Chen, Gengru; Li, Xiaoyang; Luo, Yijia; Zhang, Xingyao; Huang, Yanhui; Bu, Xingyuan; Tan, Yingshui; Yuan, Chun; Wang, Jiamang; Su, Wenbo; Zheng, Bo |
Format: | Article |
Language: | eng |
Keywords: | Adaptive sampling; Density; Large language models; Supervision |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Li, Yanshi; Xiong, Shaopan; Chen, Gengru; Li, Xiaoyang; Luo, Yijia; Zhang, Xingyao; Huang, Yanhui; Bu, Xingyuan; Tan, Yingshui; Yuan, Chun; Wang, Jiamang; Su, Wenbo; Zheng, Bo |
description | Reinforcement Learning from Human Feedback (RLHF) has proven highly effective in aligning Large Language Models (LLMs) with human preferences. However, the original RLHF typically optimizes under an overall reward, which can lead to a suboptimal learning process. This limitation stems from RLHF's lack of awareness regarding which specific tokens should be reinforced or suppressed. Moreover, conflicts in supervision can arise, for instance, when a chosen response includes erroneous tokens while a rejected response contains accurate elements. To rectify these shortcomings, an increasing number of dense reward methods, such as step-wise and token-wise RLHF, have been proposed. However, these existing methods are limited to specific tasks (like mathematics). In this paper, we propose the "Adaptive Message-wise RLHF" method, which applies robustly to various tasks. By defining pivot tokens as key indicators, our approach adaptively identifies essential information and converts sequence-level supervision into fine-grained, subsequence-level supervision. This aligns the density of rewards and action spaces more closely with the information density of the input. Experiments demonstrate that our method can be integrated into various training methods, significantly mitigating hallucination and catastrophic forgetting problems while outperforming other methods on multiple evaluation metrics. Our method improves the success rate on adversarial samples by 10% compared to the sample-wise approach, and achieves a 1.3% improvement on evaluation benchmarks such as MMLU, GSM8K, and HumanEval. |
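The abstract describes converting a sequence-level preference signal into subsequence-level supervision anchored on pivot tokens, but this record gives no implementation details. The following is a minimal illustrative sketch, not the authors' method: it assumes hypothetical per-token reward-model scores and a simple threshold rule for selecting pivots, then builds a mask that reinforces (+1), suppresses (-1), or ignores (0) each token.

```python
# Illustrative sketch only: the paper's exact pivot-token criterion is not
# given in this record, so the threshold rule below is an assumption.
from typing import List

def pivot_mask(token_scores: List[float], threshold: float = 0.5) -> List[int]:
    """Mark tokens whose |score| crosses the threshold as pivots (+1 or -1)
    and leave the rest unsupervised (0), turning a sequence-level signal
    into subsequence-level supervision."""
    mask = []
    for s in token_scores:
        if s >= threshold:
            mask.append(1)    # reinforce this token
        elif s <= -threshold:
            mask.append(-1)   # suppress this token
        else:
            mask.append(0)    # no gradient: token carries little information
    return mask

def masked_sequence_reward(token_scores: List[float], mask: List[int]) -> float:
    """Dense reward: only pivot tokens contribute, aligning reward density
    with the information density of the response."""
    supervised = [s * m for s, m in zip(token_scores, mask) if m != 0]
    return sum(supervised) / max(len(supervised), 1)

if __name__ == "__main__":
    # Hypothetical per-token scores from a reward model for one response.
    scores = [0.1, 0.9, -0.7, 0.05, 0.6]
    m = pivot_mask(scores)
    print(m)                                   # [0, 1, -1, 0, 1]
    print(masked_sequence_reward(scores, m))   # 0.733...
```

Under this reading, tokens far from any pivot receive no gradient, so a chosen response containing a few erroneous tokens and a rejected response containing accurate spans no longer produce conflicting whole-sequence supervision.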
format | Article |
fullrecord | <record><control><sourceid>proquest</sourceid><recordid>TN_cdi_proquest_journals_3124188059</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>3124188059</sourcerecordid><originalsourceid>FETCH-proquest_journals_31241880593</originalsourceid><addsrcrecordid>eNqNy70OgjAUhuHGxESi3MNJnEmgBUU3_J_9GZxIQ49Yggdsi9y-DF6A0zc83ztiHhciCtKY8wnzra3CMOSLJU8S4bF7pmTr9Adhh2QRzthLo9ZwI4XGOklKUwnuiXCULWzQ9YgEWeF0QzDoL4BLKwsEPVCtS3ohuRkbP2Rt0f_tlM0P--v2FLSmeXdoXV41naGBchHxOErTMFmJ_15fV11BPQ</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>3124188059</pqid></control><display><type>article</type><title>Adaptive Dense Reward: Understanding the Gap Between Action and Reward Space in Alignment</title><source>Free E- Journals</source><creator>Li, Yanshi ; Xiong, Shaopan ; Chen, Gengru ; Li, Xiaoyang ; Luo, Yijia ; Zhang, Xingyao ; Huang, Yanhui ; Bu, Xingyuan ; Tan, Yingshui ; Yuan, Chun ; Wang, Jiamang ; Su, Wenbo ; Zheng, Bo</creator><creatorcontrib>Li, Yanshi ; Xiong, Shaopan ; Chen, Gengru ; Li, Xiaoyang ; Luo, Yijia ; Zhang, Xingyao ; Huang, Yanhui ; Bu, Xingyuan ; Tan, Yingshui ; Yuan, Chun ; Wang, Jiamang ; Su, Wenbo ; Zheng, Bo</creatorcontrib><description>Reinforcement Learning from Human Feedback (RLHF) has proven highly effective in aligning Large Language Models (LLMs) with human preferences. However, the original RLHF typically optimizes under an overall reward, which can lead to a suboptimal learning process. This limitation stems from RLHF's lack of awareness regarding which specific tokens should be reinforced or suppressed. Moreover, conflicts in supervision can arise, for instance, when a chosen response includes erroneous tokens, while a rejected response contains accurate elements. To rectify these shortcomings, increasing dense reward methods, such as step-wise and token-wise RLHF, have been proposed. However, these existing methods are limited to specific tasks (like mathematics). In this paper, we propose the ``Adaptive Message-wise RLHF'' method, which robustly applies to various tasks. By defining pivot tokens as key indicators, our approach adaptively identifies essential information and converts sequence-level supervision into fine-grained, subsequence-level supervision. This aligns the density of rewards and action spaces more closely with the information density of the input. Experiments demonstrate that our method can be integrated into various training methods, significantly mitigating hallucinations and catastrophic forgetting problems, while outperforming other methods on multiple evaluation metrics. Our method improves the success rate on adversarial samples by 10\% compared to the sample-wise approach, and achieves a 1.3\% improvement on evaluation benchmarks such as MMLU, GSM8K, HumanEval, etc.</description><identifier>EISSN: 2331-8422</identifier><language>eng</language><publisher>Ithaca: Cornell University Library, arXiv.org</publisher><subject>Adaptive sampling ; Density ; Large language models ; Supervision</subject><ispartof>arXiv.org, 2024-12</ispartof><rights>2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). 
Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>776,780</link.rule.ids></links><search><creatorcontrib>Li, Yanshi</creatorcontrib><creatorcontrib>Xiong, Shaopan</creatorcontrib><creatorcontrib>Chen, Gengru</creatorcontrib><creatorcontrib>Li, Xiaoyang</creatorcontrib><creatorcontrib>Luo, Yijia</creatorcontrib><creatorcontrib>Zhang, Xingyao</creatorcontrib><creatorcontrib>Huang, Yanhui</creatorcontrib><creatorcontrib>Bu, Xingyuan</creatorcontrib><creatorcontrib>Tan, Yingshui</creatorcontrib><creatorcontrib>Yuan, Chun</creatorcontrib><creatorcontrib>Wang, Jiamang</creatorcontrib><creatorcontrib>Su, Wenbo</creatorcontrib><creatorcontrib>Zheng, Bo</creatorcontrib><title>Adaptive Dense Reward: Understanding the Gap Between Action and Reward Space in Alignment</title><title>arXiv.org</title><description>Reinforcement Learning from Human Feedback (RLHF) has proven highly effective in aligning Large Language Models (LLMs) with human preferences. However, the original RLHF typically optimizes under an overall reward, which can lead to a suboptimal learning process. This limitation stems from RLHF's lack of awareness regarding which specific tokens should be reinforced or suppressed. Moreover, conflicts in supervision can arise, for instance, when a chosen response includes erroneous tokens, while a rejected response contains accurate elements. To rectify these shortcomings, increasing dense reward methods, such as step-wise and token-wise RLHF, have been proposed. However, these existing methods are limited to specific tasks (like mathematics). In this paper, we propose the ``Adaptive Message-wise RLHF'' method, which robustly applies to various tasks. By defining pivot tokens as key indicators, our approach adaptively identifies essential information and converts sequence-level supervision into fine-grained, subsequence-level supervision. This aligns the density of rewards and action spaces more closely with the information density of the input. Experiments demonstrate that our method can be integrated into various training methods, significantly mitigating hallucinations and catastrophic forgetting problems, while outperforming other methods on multiple evaluation metrics. 
Our method improves the success rate on adversarial samples by 10\% compared to the sample-wise approach, and achieves a 1.3\% improvement on evaluation benchmarks such as MMLU, GSM8K, HumanEval, etc.</description><subject>Adaptive sampling</subject><subject>Density</subject><subject>Large language models</subject><subject>Supervision</subject><issn>2331-8422</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2024</creationdate><recordtype>article</recordtype><sourceid>BENPR</sourceid><recordid>eNqNy70OgjAUhuHGxESi3MNJnEmgBUU3_J_9GZxIQ49Yggdsi9y-DF6A0zc83ztiHhciCtKY8wnzra3CMOSLJU8S4bF7pmTr9Adhh2QRzthLo9ZwI4XGOklKUwnuiXCULWzQ9YgEWeF0QzDoL4BLKwsEPVCtS3ohuRkbP2Rt0f_tlM0P--v2FLSmeXdoXV41naGBchHxOErTMFmJ_15fV11BPQ</recordid><startdate>20241204</startdate><enddate>20241204</enddate><creator>Li, Yanshi</creator><creator>Xiong, Shaopan</creator><creator>Chen, Gengru</creator><creator>Li, Xiaoyang</creator><creator>Luo, Yijia</creator><creator>Zhang, Xingyao</creator><creator>Huang, Yanhui</creator><creator>Bu, Xingyuan</creator><creator>Tan, Yingshui</creator><creator>Yuan, Chun</creator><creator>Wang, Jiamang</creator><creator>Su, Wenbo</creator><creator>Zheng, Bo</creator><general>Cornell University Library, arXiv.org</general><scope>8FE</scope><scope>8FG</scope><scope>ABJCF</scope><scope>ABUWG</scope><scope>AFKRA</scope><scope>AZQEC</scope><scope>BENPR</scope><scope>BGLVJ</scope><scope>CCPQU</scope><scope>DWQXO</scope><scope>HCIFZ</scope><scope>L6V</scope><scope>M7S</scope><scope>PIMPY</scope><scope>PQEST</scope><scope>PQQKQ</scope><scope>PQUKI</scope><scope>PRINS</scope><scope>PTHSS</scope></search><sort><creationdate>20241204</creationdate><title>Adaptive Dense Reward: Understanding the Gap Between Action and Reward Space in Alignment</title><author>Li, Yanshi ; Xiong, Shaopan ; Chen, Gengru ; Li, Xiaoyang ; Luo, Yijia ; Zhang, Xingyao ; Huang, Yanhui ; Bu, Xingyuan ; Tan, Yingshui ; Yuan, Chun ; Wang, Jiamang ; Su, Wenbo ; Zheng, Bo</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-proquest_journals_31241880593</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2024</creationdate><topic>Adaptive sampling</topic><topic>Density</topic><topic>Large language models</topic><topic>Supervision</topic><toplevel>online_resources</toplevel><creatorcontrib>Li, Yanshi</creatorcontrib><creatorcontrib>Xiong, Shaopan</creatorcontrib><creatorcontrib>Chen, Gengru</creatorcontrib><creatorcontrib>Li, Xiaoyang</creatorcontrib><creatorcontrib>Luo, Yijia</creatorcontrib><creatorcontrib>Zhang, Xingyao</creatorcontrib><creatorcontrib>Huang, Yanhui</creatorcontrib><creatorcontrib>Bu, Xingyuan</creatorcontrib><creatorcontrib>Tan, Yingshui</creatorcontrib><creatorcontrib>Yuan, Chun</creatorcontrib><creatorcontrib>Wang, Jiamang</creatorcontrib><creatorcontrib>Su, Wenbo</creatorcontrib><creatorcontrib>Zheng, Bo</creatorcontrib><collection>ProQuest SciTech Collection</collection><collection>ProQuest Technology Collection</collection><collection>Materials Science & Engineering Collection</collection><collection>ProQuest Central (Alumni Edition)</collection><collection>ProQuest Central UK/Ireland</collection><collection>ProQuest Central Essentials</collection><collection>ProQuest Central</collection><collection>Technology Collection</collection><collection>ProQuest One Community College</collection><collection>ProQuest Central Korea</collection><collection>SciTech Premium Collection</collection><collection>ProQuest 
Engineering Collection</collection><collection>Engineering Database</collection><collection>Publicly Available Content Database</collection><collection>ProQuest One Academic Eastern Edition (DO NOT USE)</collection><collection>ProQuest One Academic</collection><collection>ProQuest One Academic UKI Edition</collection><collection>ProQuest Central China</collection><collection>Engineering Collection</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Li, Yanshi</au><au>Xiong, Shaopan</au><au>Chen, Gengru</au><au>Li, Xiaoyang</au><au>Luo, Yijia</au><au>Zhang, Xingyao</au><au>Huang, Yanhui</au><au>Bu, Xingyuan</au><au>Tan, Yingshui</au><au>Yuan, Chun</au><au>Wang, Jiamang</au><au>Su, Wenbo</au><au>Zheng, Bo</au><format>book</format><genre>document</genre><ristype>GEN</ristype><atitle>Adaptive Dense Reward: Understanding the Gap Between Action and Reward Space in Alignment</atitle><jtitle>arXiv.org</jtitle><date>2024-12-04</date><risdate>2024</risdate><eissn>2331-8422</eissn><abstract>Reinforcement Learning from Human Feedback (RLHF) has proven highly effective in aligning Large Language Models (LLMs) with human preferences. However, the original RLHF typically optimizes under an overall reward, which can lead to a suboptimal learning process. This limitation stems from RLHF's lack of awareness regarding which specific tokens should be reinforced or suppressed. Moreover, conflicts in supervision can arise, for instance, when a chosen response includes erroneous tokens, while a rejected response contains accurate elements. To rectify these shortcomings, increasing dense reward methods, such as step-wise and token-wise RLHF, have been proposed. However, these existing methods are limited to specific tasks (like mathematics). In this paper, we propose the ``Adaptive Message-wise RLHF'' method, which robustly applies to various tasks. By defining pivot tokens as key indicators, our approach adaptively identifies essential information and converts sequence-level supervision into fine-grained, subsequence-level supervision. This aligns the density of rewards and action spaces more closely with the information density of the input. Experiments demonstrate that our method can be integrated into various training methods, significantly mitigating hallucinations and catastrophic forgetting problems, while outperforming other methods on multiple evaluation metrics. Our method improves the success rate on adversarial samples by 10\% compared to the sample-wise approach, and achieves a 1.3\% improvement on evaluation benchmarks such as MMLU, GSM8K, HumanEval, etc.</abstract><cop>Ithaca</cop><pub>Cornell University Library, arXiv.org</pub><oa>free_for_read</oa></addata></record> |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3124188059 |
source | Free E-Journals |
subjects | Adaptive sampling; Density; Large language models; Supervision |
title | Adaptive Dense Reward: Understanding the Gap Between Action and Reward Space in Alignment |