Training Language Models to Self-Correct via Reinforcement Learning

Self-correction is a highly desirable capability of large language models (LLMs), yet it has consistently been found to be largely ineffective in modern LLMs. Current methods for training self-correction typically depend on multiple models, a more advanced model, or additional forms of supervision. To address these shortcomings, we develop SCoRe, a multi-turn online reinforcement learning (RL) approach that significantly improves an LLM's self-correction ability using entirely self-generated data. To build SCoRe, we first show that variants of supervised fine-tuning (SFT) on offline model-generated correction traces are often insufficient for instilling self-correction behavior. In particular, we observe that training via SFT falls prey either to a distribution mismatch between the mistakes made by the data-collection policy and the model's own responses, or to behavior collapse, where learning implicitly prefers only a certain mode of correction that is often ineffective at self-correction on test problems. SCoRe addresses these challenges by training under the model's own distribution of self-generated correction traces and using appropriate regularization to steer the learning process toward a self-correction strategy that is effective at test time, as opposed to simply fitting high-reward responses for a given prompt. This regularization consists of an initial phase of multi-turn RL on the base model to produce a policy initialization that is less susceptible to collapse, followed by a reward bonus that amplifies self-correction during training. Applied to Gemini 1.0 Pro and 1.5 Flash models, SCoRe achieves state-of-the-art self-correction performance, improving the base models' self-correction by 15.6% on MATH and 9.1% on HumanEval.
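
The reward-shaping idea in the abstract (a bonus that amplifies self-correction in the second training phase) can be made concrete with a minimal sketch. This is an illustration under assumptions, not the paper's implementation: the binary grader `correct`, the bonus coefficient `alpha`, and the function names are all hypothetical.

```python
# Minimal sketch of a self-correction reward bonus (illustrative, not the
# paper's code). The second attempt earns its own correctness reward plus a
# bonus proportional to its improvement over the first attempt, so the
# policy is rewarded for actually correcting mistakes rather than for
# repeating an already-good answer.

def correct(answer: str, reference: str) -> float:
    """Binary correctness reward (placeholder grader)."""
    return 1.0 if answer.strip() == reference.strip() else 0.0

def shaped_rewards(first: str, second: str, reference: str,
                   alpha: float = 2.0) -> tuple[float, float]:
    """Per-turn rewards for a two-attempt episode."""
    r1 = correct(first, reference)
    r2 = correct(second, reference)
    bonus = alpha * (r2 - r1)  # positive when the revision fixes a mistake
    return r1, r2 + bonus

# Example: a wrong first attempt followed by a correct revision,
# shaped_rewards("42", "43", "43"), yields (0.0, 3.0) with alpha = 2.0;
# an unchanged correct answer, shaped_rewards("43", "43", "43"),
# earns no bonus: (1.0, 1.0).
```

Note that the bonus is negative when a revision breaks a correct first answer, so this shaping discourages both non-correction and degradation of the first attempt.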

Bibliographic Details
Main Authors: Kumar, Aviral; Zhuang, Vincent; Agarwal, Rishabh; Su, Yi; Co-Reyes, John D; Singh, Avi; Baumli, Kate; Iqbal, Shariq; Bishop, Colton; Roelofs, Rebecca; Zhang, Lei M; McKinney, Kay; Shrivastava, Disha; Paduraru, Cosmin; Tucker, George; Precup, Doina; Behbahani, Feryal; Faust, Aleksandra
Format: Article
Language: English
Subjects: Computer Science - Learning
Published: 2024-09-19
DOI: 10.48550/arxiv.2409.12917
Rights: Creative Commons Attribution 4.0 (http://creativecommons.org/licenses/by/4.0)
Online Access: https://arxiv.org/abs/2409.12917 (full text, open access)