Code Vulnerability Detection: A Comparative Analysis of Emerging Large Language Models
Saved in:

Main author(s): | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | The growing trend of vulnerability issues in software development as a result
of a large dependence on open-source projects has received considerable
attention recently. This paper investigates the effectiveness of Large Language
Models (LLMs) in identifying vulnerabilities within codebases, with a focus on
the latest advancements in LLM technology. Through a comparative analysis, we
assess the performance of emerging LLMs, specifically Llama, CodeLlama, Gemma,
and CodeGemma, alongside established state-of-the-art models such as BERT,
RoBERTa, and GPT-3. Our study aims to shed light on the capabilities of LLMs in
vulnerability detection, contributing to the enhancement of software security
practices across diverse open-source repositories. We observe that CodeGemma
achieves the highest F1-score of 58% and a Recall of 87% among the recently
introduced large language models for detecting software security vulnerabilities. |
DOI: | 10.48550/arxiv.2409.10490 |
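For context on the reported metrics: given the abstract's F1-score (58%) and Recall (87%), the implied Precision can be recovered from the standard F1 definition. A minimal sketch follows; the 58% and 87% figures come from the abstract, while the derived Precision (~43.5%) is our back-of-the-envelope calculation, not a number reported by the paper.

```python
def precision_from_f1_recall(f1: float, recall: float) -> float:
    """Solve F1 = 2*P*R / (P + R) for P, giving P = F1*R / (2*R - F1)."""
    return f1 * recall / (2 * recall - f1)

# Figures from the abstract: F1 = 0.58, Recall = 0.87
implied_precision = precision_from_f1_recall(0.58, 0.87)
print(round(implied_precision, 3))  # ≈ 0.435
```

This suggests CodeGemma's high Recall comes at the cost of a substantially lower Precision, a common trade-off in vulnerability detectors that favor flagging potential issues over suppressing false positives.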