CodeArt: Better Code Models by Attention Regularization When Symbols Are Lacking
Saved in:

Main Author:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Abstract: Transformer-based code models have impressive performance in many software
engineering tasks. However, their effectiveness degrades when symbols are
missing or not informative. The reason is that, without the help of symbols, the
model may not learn to attend to the right correlations/contexts. We propose a
new method to pre-train general code models when symbols are lacking. We observe
that in such cases, programs degenerate into something written in a very
primitive language. We hence propose to use program analysis to extract contexts
a priori (instead of relying on symbols and masked language modeling as in
vanilla models). We then leverage a novel attention masking method that allows
the model to attend only to these contexts, e.g., bi-directional program
dependence transitive closures and token co-occurrences. Meanwhile, the inherent
self-attention mechanism is utilized to learn which of the allowed attentions
are more important than others. To realize the idea, we enhance the vanilla
tokenization and model architecture of a BERT model, construct and utilize
attention masks, and introduce a new pre-training algorithm. We pre-train this
BERT-like model from scratch on a dataset of 26 million stripped binary
functions with explicit program dependence information extracted by our tool. We
apply the model to three downstream tasks: binary similarity, type inference,
and malware family classification. Our pre-trained model improves the state of
the art (SOTA) in these tasks from 53% to 64%, from 49% to 60%, and from 74% to
94%, respectively. It also substantially outperforms other general pre-training
techniques for code understanding models.
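To illustrate the attention-masking idea described in the abstract, the sketch below is a minimal, hypothetical PyTorch example rather than the authors' implementation: it computes a bi-directional transitive closure over token-level program-dependence edges, adds a simple local co-occurrence window, and uses the resulting boolean mask to block disallowed positions in single-head scaled dot-product self-attention. The names `build_attention_mask` and `cooccur_window`, as well as the window-based co-occurrence rule, are assumptions for illustration; CodeArt's actual tokenization, mask construction, and pre-training algorithm are defined in the paper.

```python
# Minimal sketch (not the authors' code) of attention regularization via a
# dependence-derived mask. Assumes `dep_edges` lists directed program-dependence
# edges between token positions of one function.

import torch
import torch.nn.functional as F


def transitive_closure(n: int, dep_edges: list[tuple[int, int]]) -> torch.Tensor:
    """Bi-directional transitive closure of the dependence graph (boolean n x n)."""
    reach = torch.eye(n, dtype=torch.bool)
    for src, dst in dep_edges:
        reach[src, dst] = True
        reach[dst, src] = True          # bi-directional, as described in the abstract
    for k in range(n):                  # Floyd-Warshall-style closure
        reach |= reach[:, k:k + 1] & reach[k:k + 1, :]
    return reach


def build_attention_mask(n: int,
                         dep_edges: list[tuple[int, int]],
                         cooccur_window: int = 2) -> torch.Tensor:
    """Allow attention only along closure edges and a local co-occurrence window."""
    allowed = transitive_closure(n, dep_edges)
    for i in range(n):                  # hypothetical local co-occurrence rule
        lo, hi = max(0, i - cooccur_window), min(n, i + cooccur_window + 1)
        allowed[i, lo:hi] = True
    return allowed


def masked_self_attention(x: torch.Tensor, allowed: torch.Tensor) -> torch.Tensor:
    """Single-head self-attention where disallowed positions get -inf scores."""
    d = x.size(-1)
    scores = x @ x.transpose(-2, -1) / d ** 0.5
    scores = scores.masked_fill(~allowed, float("-inf"))
    return F.softmax(scores, dim=-1) @ x


# Toy usage: 6 tokens, two dependence edges.
x = torch.randn(6, 16)
mask = build_attention_mask(6, dep_edges=[(0, 3), (3, 5)])
out = masked_self_attention(x, mask)
print(out.shape)  # torch.Size([6, 16])
```

Because disallowed pairs receive a score of negative infinity before the softmax, their attention weights become exactly zero, so the self-attention mechanism can only distribute weight among the contexts permitted by the mask.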
DOI: 10.48550/arxiv.2402.11842