Investigating Large Language Models for Code Vulnerability Detection: An Experimental Study
Saved in:

| Main authors: | , , , , , , , |
| --- | --- |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract:

Code vulnerability detection (CVD) is essential for addressing and preventing system security issues, playing a crucial role in ensuring software security. Previous learning-based vulnerability detection methods rely on either fine-tuning medium-size sequence models or training smaller neural networks from scratch. Recent advances in large pre-trained language models (LLMs) have showcased remarkable capabilities in various code intelligence tasks, including code understanding and generation. However, the effectiveness of LLMs in detecting code vulnerabilities remains largely under-explored. This work investigates that gap by fine-tuning LLMs for the CVD task, covering four widely used open-source LLMs. We also implement five previous graph-based or medium-size sequence models for comparison. Experiments are conducted on five commonly used CVD datasets, covering both short and long samples. In addition, we conduct quantitative experiments on the class imbalance issue and on model performance across samples of different lengths, both of which are rarely studied in previous work. To better support the community, we open-source all code and resources of this study at https://github.com/SakiRinn/LLM4CVD and https://huggingface.co/datasets/xuefen/VulResource.
DOI: 10.48550/arxiv.2412.18260
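The abstract describes fine-tuning open-source LLMs as binary vulnerability classifiers and probing the class imbalance issue. The sketch below illustrates one plausible setup under stated assumptions, not the paper's actual pipeline (see the linked repository for that): a LoRA-adapted sequence-classification head over an open LLM, trained with a class-weighted loss. The model name, hyperparameters, class weights, and the `WeightedLossTrainer` helper are all illustrative assumptions.

```python
# A minimal sketch (assumed setup, not the paper's code) of fine-tuning an
# open-source LLM as a binary vulnerability classifier with a class-weighted
# loss. Model name, LoRA settings, and weights below are assumptions.
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer
from peft import LoraConfig, get_peft_model

MODEL_NAME = "codellama/CodeLlama-7b-hf"  # assumed choice of open-source LLM

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Llama-style models ship no pad token
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# LoRA adapters make 7B-scale fine-tuning feasible on a single GPU.
model = get_peft_model(
    model, LoraConfig(task_type="SEQ_CLS", r=16, lora_alpha=32, lora_dropout=0.05)
)

class WeightedLossTrainer(Trainer):
    """Hugging Face Trainer with class-weighted cross-entropy, one common
    remedy for the vulnerable/non-vulnerable imbalance the study examines."""

    def __init__(self, class_weights, **kwargs):
        super().__init__(**kwargs)
        # Illustrative only; in practice weights would be derived from the
        # label frequencies of each dataset, e.g. torch.tensor([1.0, 9.0]).
        self.class_weights = class_weights

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss = F.cross_entropy(
            outputs.logits, labels,
            weight=self.class_weights.to(outputs.logits.device),
        )
        return (loss, outputs) if return_outputs else loss

# Long functions exceed the context window and must be truncated, which is
# one reason performance on short vs. long samples is worth measuring separately.
enc = tokenizer("int main() { return 0; }", truncation=True,
                max_length=1024, return_tensors="pt")
```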