Unveiling Project-Specific Bias in Neural Code Models
Main authors: , , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract:
Deep learning has introduced significant improvements in many software analysis tasks. Although Large Language Model (LLM)-based neural code models demonstrate commendable performance when trained and tested within the intra-project independent and identically distributed (IID) setting, they often struggle to generalize effectively to real-world inter-project out-of-distribution (OOD) data. In this work, we show that this phenomenon is caused by the models' heavy reliance on project-specific shortcuts for prediction instead of ground-truth evidence. We propose a Cond-Idf measurement to interpret this behavior, which quantifies both the relatedness of a token to a label and the token's project-specificness. The strong correlation between model behavior and the proposed measurement indicates that, without proper regularization, models tend to leverage spurious statistical cues for prediction. Equipped with these observations, we propose a novel bias mitigation mechanism that regularizes the model's learning behavior by leveraging latent logic relations among samples. Experimental results on two representative program analysis tasks indicate that our mitigation framework improves both inter-project OOD generalization and adversarial robustness without sacrificing accuracy on intra-project IID data.
DOI: 10.48550/arxiv.2201.07381
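
The record does not spell out how Cond-Idf is computed. As a rough illustration of the idea described in the abstract (relating a token to a label while also measuring how project-specific the token is), the Python sketch below combines a conditional co-occurrence term with an inverse document frequency taken over projects. The function name, data layout, and the exact way the two terms are combined are assumptions for illustration, not the paper's definition.

```python
import math


def cond_idf(samples, token, label):
    """Illustrative Cond-Idf-style score (an assumed formulation, not the paper's exact definition).

    `samples` is an iterable of (tokens, label, project) triples.
    The score multiplies:
      * Cond: P(label | token), how strongly the token co-occurs with the label, and
      * Idf:  log(N_projects / n_projects containing the token), how project-specific the token is.
    A high product suggests the token is both predictive of the label and concentrated
    in few projects, i.e. a potential project-specific shortcut.
    """
    token_total = 0              # samples containing the token
    token_with_label = 0         # of those, samples carrying `label`
    projects_with_token = set()  # projects in which the token appears
    all_projects = set()

    for tokens, sample_label, project in samples:
        all_projects.add(project)
        if token in tokens:
            token_total += 1
            projects_with_token.add(project)
            if sample_label == label:
                token_with_label += 1

    if token_total == 0 or not projects_with_token:
        return 0.0

    cond = token_with_label / token_total
    idf = math.log(len(all_projects) / len(projects_with_token))
    return cond * idf


# Tiny usage example with made-up data: "memcpy" appears only in project "p1"
# and always with the "vulnerable" label, so it scores high as a candidate shortcut.
data = [
    (["memcpy", "len"], "vulnerable", "p1"),
    (["memcpy", "buf"], "vulnerable", "p1"),
    (["strcpy", "buf"], "vulnerable", "p2"),
    (["printf", "msg"], "benign", "p3"),
]
print(cond_idf(data, "memcpy", "vulnerable"))
```

Under this assumed scoring, a token that is highly predictive of a label but appears in only a few projects gets a large value, which matches the abstract's description of project-specific shortcuts as spurious statistical cues.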