Analyzing the Performance of Large Language Models on Code Summarization

Large language models (LLMs) such as Llama 2 perform very well on tasks that involve both natural language and source code, particularly code summarization and code generation. We show that for the task of code summarization, the performance of these models on individual examples often depends on the amount of (subword) token overlap between the code and the corresponding reference natural language descriptions in the dataset. This token overlap arises because the reference descriptions in standard datasets (corresponding to docstrings in large code bases) are often highly similar to the names of the functions they describe. We also show that this token overlap occurs largely in the function names of the code and compare the relative performance of these models after removing function names versus removing code structure. We also show that using multiple evaluation metrics like BLEU and BERTScore gives us very little additional insight since these metrics are highly correlated with each other.
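The abstract's central measurement, subword token overlap between a function's code and its reference description, can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: the GPT-2 BPE tokenizer and the Jaccard definition of overlap are assumptions made here for concreteness.

```python
# Minimal sketch (not the paper's code): measure subword token overlap
# between a function and its reference description. The GPT-2 BPE tokenizer
# and the Jaccard set-overlap definition are illustrative choices.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def subword_overlap(code: str, reference: str) -> float:
    """Jaccard overlap between the subword token sets of code and reference."""
    code_tokens = set(tokenizer.tokenize(code))
    ref_tokens = set(tokenizer.tokenize(reference))
    if not code_tokens or not ref_tokens:
        return 0.0
    return len(code_tokens & ref_tokens) / len(code_tokens | ref_tokens)

# Docstrings often echo the function name, which is where most of the
# overlap comes from on standard datasets, per the abstract.
code = "def get_user_name(user):\n    return user.name"
reference = "Get the name of the user."
print(f"overlap = {subword_overlap(code, reference):.2f}")
```

The abstract also reports that BLEU and BERTScore are highly correlated across examples. A hedged sketch of that kind of check follows, using NLTK's sentence-level BLEU and SciPy's Pearson correlation; the BERTScore values are hardcoded placeholders (in practice they would come from the `bert-score` package), so the inputs are hypothetical and the printed number is not a result from the paper.

```python
# Sketch of a metric-correlation check like the one the abstract describes.
# NLTK sentence-level BLEU and SciPy's Pearson r; the BERTScore column is a
# hypothetical placeholder so the example runs without model downloads.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import pearsonr

smooth = SmoothingFunction().method1
references = [
    "get the name of the user",
    "open the file for reading",
    "return the sum of two numbers",
    "delete the cache entry",
]
hypotheses = [
    "gets the user name",
    "opens a file in read mode",
    "add two numbers and return the result",
    "remove an entry from the cache",
]
bleu_scores = [
    sentence_bleu([r.split()], h.split(), smoothing_function=smooth)
    for r, h in zip(references, hypotheses)
]
bertscore_f1 = [0.91, 0.88, 0.84, 0.86]  # placeholder values, not model output

r, p = pearsonr(bleu_scores, bertscore_f1)
print(f"Pearson r = {r:.2f}")
```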

Bibliographic details

Published in: arXiv.org, 2024-04
Main authors: Haldar, Rajarshi; Hockenmaier, Julia
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Subjects: Datasets; Descriptions; Large language models; Natural language; Source code
Online access: Full text