CodeSAM: Source Code Representation Learning by Infusing Self-Attention with Multi-Code-View Graphs
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Machine Learning (ML) for software engineering (SE) has gained prominence due to its ability to significantly enhance the performance of various SE applications. This progress is largely attributed to the development of generalizable source code representations that effectively capture the syntactic and semantic characteristics of code. In recent years, pre-trained transformer-based models, inspired by natural language processing (NLP), have shown remarkable success in SE tasks. However, source code contains structural and semantic properties embedded within its grammar, which can be extracted from structured code-views such as the Abstract Syntax Tree (AST), Data-Flow Graph (DFG), and Control-Flow Graph (CFG). These code-views can complement NLP techniques, further improving SE tasks. Unfortunately, there is no flexible framework for effectively infusing arbitrary code-views into existing transformer-based models. Therefore, in this work, we propose CodeSAM, a novel scalable framework that infuses multiple code-views into transformer-based models by creating self-attention masks. We use CodeSAM to fine-tune a small language model (SLM) such as CodeBERT on the downstream SE tasks of semantic code search, code clone detection, and program classification. Experimental results show that this technique improves downstream performance over SLMs such as GraphCodeBERT and CodeBERT on all three tasks, whether individual code-views or a combination of code-views is used during fine-tuning. We believe these results indicate that techniques like CodeSAM can help create compact yet performant code SLMs that fit in resource-constrained settings.
DOI: 10.48550/arxiv.2411.14611
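The summary describes infusing code-views into a transformer by turning their graph edges into self-attention masks. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's implementation: the AST/DFG edge lists, the mapping from graph nodes to token positions, the undirected treatment of edges, and the OR-combination of views are all hypothetical choices made for the example. Only the Hugging Face `transformers` calls and the `microsoft/codebert-base` checkpoint are real.

```python
# Minimal sketch (not the paper's method): build a token-level self-attention
# mask from several code-view graphs and pass it to CodeBERT.
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base")
model = RobertaModel.from_pretrained("microsoft/codebert-base")

code = "def add(a, b): return a + b"
enc = tokenizer(code, return_tensors="pt")
seq_len = enc["input_ids"].size(1)


def view_to_mask(edges, seq_len):
    """Turn one code-view's edges (pairs of token indices) into a boolean
    seq_len x seq_len attention mask; every token may attend to itself."""
    mask = torch.eye(seq_len, dtype=torch.bool)
    for i, j in edges:
        if i < seq_len and j < seq_len:
            mask[i, j] = True
            mask[j, i] = True  # treat code-view edges as undirected for simplicity
    return mask


# Hypothetical token-index edges, as they might look after mapping AST and DFG
# nodes onto sub-token positions for this snippet.
ast_edges = [(1, 2), (2, 4), (2, 6), (1, 9)]
dfg_edges = [(4, 11), (6, 12)]

# Combine views with element-wise OR: a pair of tokens may attend to each
# other if any code-view connects them (one possible combination rule).
combined = view_to_mask(ast_edges, seq_len) | view_to_mask(dfg_edges, seq_len)
combined[0, :] = True  # let the <s> token attend globally (assumed convention)
combined[:, 0] = True

# Hugging Face encoders accept a (batch, seq, seq) attention mask, which is
# broadcast over attention heads internally, so the combined mask can be used
# directly in place of the usual padding mask.
out = model(input_ids=enc["input_ids"], attention_mask=combined.unsqueeze(0).long())
print(out.last_hidden_state.shape)  # torch.Size([1, seq_len, 768])
```

In this sketch, adding or dropping a code-view only changes which edge lists are OR-ed into the mask, which is one way a framework could stay flexible about which views are infused during fine-tuning.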