Evaluating Large Language Models Trained on Code

We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.

Detailed Description

Saved in:
Bibliographic Details
Main authors: Chen, Mark, Tworek, Jerry, Jun, Heewoo, Yuan, Qiming, Pinto, Henrique Ponde de Oliveira, Kaplan, Jared, Edwards, Harri, Burda, Yuri, Joseph, Nicholas, Brockman, Greg, Ray, Alex, Puri, Raul, Krueger, Gretchen, Petrov, Michael, Khlaaf, Heidy, Sastry, Girish, Mishkin, Pamela, Chan, Brooke, Gray, Scott, Ryder, Nick, Pavlov, Mikhail, Power, Alethea, Kaiser, Lukasz, Bavarian, Mohammad, Winter, Clemens, Tillet, Philippe, Such, Felipe Petroski, Cummings, Dave, Plappert, Matthias, Chantzis, Fotios, Barnes, Elizabeth, Herbert-Voss, Ariel, Guss, William Hebgen, Nichol, Alex, Paino, Alex, Tezak, Nikolas, Tang, Jie, Babuschkin, Igor, Balaji, Suchir, Jain, Shantanu, Saunders, William, Hesse, Christopher, Carr, Andrew N, Leike, Jan, Achiam, Josh, Misra, Vedant, Morikawa, Evan, Radford, Alec, Knight, Matthew, Brundage, Miles, Murati, Mira, Mayer, Katie, Welinder, Peter, McGrew, Bob, Amodei, Dario, McCandlish, Sam, Sutskever, Ilya, Zaremba, Wojciech
Format: Article
Language: eng
Subjects:
Online access: Request full text
description We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.
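The repeated-sampling evaluation summarized in the abstract is formalized in the paper as the pass@k metric: generate n candidate programs per problem, count the c candidates that pass the problem's unit tests, and estimate the probability that at least one of k randomly drawn samples is correct. A minimal sketch of the standard unbiased estimator (the function name and example counts here are illustrative, not taken from this record):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generated candidates of
    which c pass the unit tests, is correct."""
    if n - c < k:
        # Fewer than k incorrect candidates exist, so any draw of k
        # samples must include at least one correct solution.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 100 samples per problem, 45 passing.
print(pass_at_k(100, 45, 1))  # → 0.45
```

The complement form avoids the numerical instability of multiplying many per-draw probabilities; `1 - C(n-c, k) / C(n, k)` is exactly the chance that a size-k draw is not composed entirely of failing candidates.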
doi_str_mv 10.48550/arxiv.2107.03374
identifier DOI: 10.48550/arxiv.2107.03374
language eng
recordid cdi_arxiv_primary_2107_03374
source arXiv.org
subjects Computer Science - Learning
title Evaluating Large Language Models Trained on Code
url https://arxiv.org/abs/2107.03374