A Comparative Analysis of Large Language Models for Code Documentation Generation

This paper presents a comprehensive comparative analysis of Large Language Models (LLMs) for the generation of code documentation. Code documentation is an essential part of the software development process. The paper evaluates the models GPT-3.5, GPT-4, Bard, Llama 2, and StarChat on parameters such as Accuracy, Completeness, Relevance, Understandability, Readability, and Time Taken, at different levels of code documentation. Our evaluation employs a checklist-based system to minimize subjectivity, providing a more objective assessment. We find that, barring StarChat, all LLMs consistently outperform the original documentation. Notably, the closed-source models GPT-3.5, GPT-4, and Bard exhibit superior performance across parameters compared to the open-source/source-available LLMs, namely Llama 2 and StarChat. Considering generation time, GPT-4 took the longest, followed by Llama 2 and Bard, with GPT-3.5 (ChatGPT) and StarChat having comparable generation times. Additionally, file-level documentation performed considerably worse on all parameters (except time taken) than inline and function-level documentation.
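The abstract mentions a checklist-based system for scoring each model on parameters such as Accuracy, Completeness, Relevance, Understandability, and Readability. The sketch below illustrates how such a scheme could work in principle; the parameter names come from the abstract, but the function, its scoring rule (fraction of checklist items satisfied), and the sample data are illustrative assumptions, not the authors' actual rubric.

```python
# Hypothetical sketch of a checklist-based scoring scheme. The parameter
# names come from the paper's abstract; the scoring rule (fraction of
# checklist items satisfied per parameter) is an assumption for illustration.

PARAMETERS = ["Accuracy", "Completeness", "Relevance",
              "Understandability", "Readability"]

def checklist_score(checks: dict[str, list[bool]]) -> dict[str, float]:
    """Convert per-parameter yes/no checklist answers into 0-1 scores."""
    scores = {}
    for param in PARAMETERS:
        answers = checks.get(param, [])
        # Score is the fraction of checklist items marked as satisfied;
        # a parameter with no recorded answers scores 0.0.
        scores[param] = sum(answers) / len(answers) if answers else 0.0
    return scores

# Example: checklist answers for one piece of generated documentation.
sample = {
    "Accuracy": [True, True, False],   # 2 of 3 checklist items satisfied
    "Completeness": [True, False],
    "Relevance": [True, True],
    "Understandability": [True],
    "Readability": [False, True],
}
print(checklist_score(sample))
```

Aggregating yes/no items into a per-parameter fraction is one simple way to reduce subjectivity, since each judgment is a concrete checklist question rather than a free-form rating.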

Bibliographic Details

Published in: arXiv.org, 2024-04
Main authors: Shubhang Shekhar Dvivedi; Vyshnav Vijay; Sai Leela Rahul Pujari; Lodh, Shoumik; Kumar, Dhruv
Format: Article
Language: English
Online access: Full text
EISSN: 2331-8422
Subjects: Artificial intelligence; Comparative analysis; Documentation; Large language models; Mathematical models; Parameters