Unlock the Correlation between Supervised Fine-Tuning and Reinforcement Learning in Training Code Large Language Models

Automatic code generation has been a longstanding research topic. With the advancement of general-purpose large language models (LLMs), coding ability stands out as one important measure of a model's reasoning performance. Usually, a two-stage training paradigm is used to obtain a Code LLM: pretraining followed by fine-tuning. Within fine-tuning, supervised fine-tuning (SFT) and reinforcement learning (RL) are often applied to improve the model's zero-shot ability. A large body of work has improved model performance on code-related benchmarks through either algorithmic modifications or dataset refinement. However, we still lack deep insight into the correlation between SFT and RL; for instance, what kind of dataset should be used to ensure generalization, or what happens if we abandon the SFT phase of fine-tuning. In this work, we attempt to understand the correlation between SFT and RL. To facilitate our research, we manually craft 100 basis Python functions, called atomic functions, and deploy a synthesizing pipeline to create a large number of synthetic functions on top of the atomic ones. In this manner, we ensure that the train and test sets remain distinct, preventing data contamination. Through a comprehensive ablation study, we find: (1) both atomic and synthetic functions are indispensable for SFT's generalization, and only a handful of synthetic functions are adequate; (2) through RL, SFT's generalization to the target domain can be greatly enhanced, even with the same training prompts; (3) training RL from scratch can alleviate the over-fitting issue introduced in the SFT phase.
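The abstract describes composing synthetic training functions out of hand-crafted atomic ones. The minimal sketch below illustrates one way such a synthesizing pipeline could look; the atomic functions (reverse_words, count_vowels), the composition strategy, and all names are illustrative assumptions, not the authors' actual pipeline.

```python
# A minimal, hypothetical sketch of the atomic -> synthetic function idea from the
# abstract. The atomic functions and composition strategy are assumptions for
# illustration only, not the paper's pipeline.
import random

# Hand-crafted "atomic" functions: small, self-contained utilities.
def reverse_words(s: str) -> str:
    """Return the words of s in reverse order."""
    return " ".join(s.split()[::-1])

def count_vowels(s: str) -> int:
    """Count the vowels in s."""
    return sum(ch in "aeiouAEIOU" for ch in s)

ATOMIC = {"reverse_words": reverse_words, "count_vowels": count_vowels}

def synthesize(names, seed=0):
    """Compose several atomic functions into one 'synthetic' function plus its source.

    The source string could serve as an SFT target, while the callable gives a
    ground-truth implementation for generating test cases.
    """
    rng = random.Random(seed)
    chosen = [rng.choice(names) for _ in range(2)]

    def synthetic(s: str):
        # Apply the chosen atomic functions in sequence, collecting their outputs.
        return [ATOMIC[name](s) for name in chosen]

    source = "def synthetic(s):\n    return [" + ", ".join(f"{n}(s)" for n in chosen) + "]"
    return synthetic, source

if __name__ == "__main__":
    fn, src = synthesize(list(ATOMIC), seed=42)
    print(src)
    print(fn("unlock the correlation between SFT and RL"))
```

Because the synthetic functions are generated from a fixed pool of atomic ones, held-out compositions can be kept out of the training set, which is how the paper argues data contamination between train and test sets is avoided.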

Bibliographic Details

Published in: arXiv.org, 2024-06
Main Authors: Chen, Jie; Han, Xintian; Ma, Yu; Zhou, Xun; Liang Xiang
Publisher: Ithaca: Cornell University Library, arXiv.org
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Ablation; Algorithms; Atomic properties; Correlation; Datasets; Large language models; Machine learning
Online Access: Full text