Program Synthesis with Large Language Models

This paper explores the limits of the current generation of large language models for program synthesis in general-purpose programming languages. We evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes.

Bibliographic details
Main authors: Austin, Jacob; Odena, Augustus; Nye, Maxwell; Bosma, Maarten; Michalewski, Henryk; Dohan, David; Jiang, Ellen; Cai, Carrie; Terry, Michael; Le, Quoc; Sutton, Charles
Format: Article
Language: English
Subjects: Computer Science - Learning; Computer Science - Programming Languages
Online access: https://arxiv.org/abs/2108.07732
DOI: 10.48550/arxiv.2108.07732
Source: arXiv.org
Published: 2021-08-15
Description: This paper explores the limits of the current generation of large language models for program synthesis in general-purpose programming languages. We evaluate a collection of such models (with between 244M and 137B parameters) on two new benchmarks, MBPP and MathQA-Python, in both the few-shot and fine-tuning regimes. Our benchmarks are designed to measure the ability of these models to synthesize short Python programs from natural language descriptions. The Mostly Basic Programming Problems (MBPP) dataset contains 974 programming tasks, designed to be solvable by entry-level programmers. The MathQA-Python dataset, a Python version of the MathQA benchmark, contains 23914 problems that evaluate the ability of the models to synthesize code from more complex text. On both datasets, we find that synthesis performance scales log-linearly with model size. Our largest models, even without fine-tuning on a code dataset, can synthesize solutions to 59.6 percent of the problems from MBPP using few-shot learning with a well-designed prompt. Fine-tuning on a held-out portion of the dataset improves performance by about 10 percentage points across most model sizes. On the MathQA-Python dataset, the largest fine-tuned model achieves 83.8 percent accuracy. Going further, we study the model's ability to engage in dialog about code, incorporating human feedback to improve its solutions. We find that natural language feedback from a human halves the error rate compared to the model's initial prediction. Additionally, we conduct an error analysis to shed light on where these models fall short and what types of programs are most difficult to generate. Finally, we explore the semantic grounding of these models by fine-tuning them to predict the results of program execution. We find that even our best models are generally unable to predict the output of a program given a specific input.
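
To make the few-shot setup described above concrete, the sketch below shows one plausible way such an evaluation loop could be wired up in Python: a handful of solved tasks plus the new task's natural-language description are concatenated into a prompt, a model proposes a candidate program, and assert-style test cases decide whether the problem counts as solved. The task format, the prompt wording, and the generate_code stub are illustrative assumptions for this sketch, not the paper's exact dataset format or code.

"""
Illustrative sketch (not the authors' code) of an MBPP-style few-shot
evaluation loop: build a prompt from solved examples and a new task,
obtain a candidate program, then run assert-based tests to score it.
"""

# Hypothetical MBPP-style tasks: a text description plus assert-based tests.
FEW_SHOT_EXAMPLES = [
    {
        "text": "Write a function to add two numbers.",
        "code": "def add(a, b):\n    return a + b",
        "tests": ["assert add(2, 3) == 5"],
    },
]

NEW_TASK = {
    "text": "Write a function to reverse a string.",
    "tests": [
        "assert reverse_string('abc') == 'cba'",
        "assert reverse_string('') == ''",
    ],
}


def build_prompt(examples, task):
    """Concatenate solved example tasks and the new task description into one prompt."""
    parts = ["You are an expert Python programmer."]
    for ex in examples:
        parts.append(f"Task: {ex['text']}\n{ex['code']}")
    parts.append(f"Task: {task['text']}")
    return "\n\n".join(parts)


def generate_code(prompt: str) -> str:
    """Stand-in for a large language model; a real run would sample from one."""
    # Hard-coded candidate so the sketch runs end to end.
    return "def reverse_string(s):\n    return s[::-1]"


def passes_tests(candidate: str, tests) -> bool:
    """Execute the candidate program and its asserts; any exception counts as failure."""
    namespace = {}
    try:
        exec(candidate, namespace)   # define the candidate function
        for test in tests:
            exec(test, namespace)    # asserts raise AssertionError on failure
        return True
    except Exception:
        return False


if __name__ == "__main__":
    prompt = build_prompt(FEW_SHOT_EXAMPLES, NEW_TASK)
    candidate = generate_code(prompt)
    print("PASS" if passes_tests(candidate, NEW_TASK["tests"]) else "FAIL")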