A code change‐oriented approach to just‐in‐time defect prediction with multiple input semantic fusion

Bibliographic Details
Published in: Expert Systems, 2024-12, Vol. 41 (12), p. n/a
Main Authors: Huang, Teng; Yu, Hui‐Qun; Fan, Gui‐Sheng; Huang, Zi‐Jie; Wu, Chen‐Yu
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Recent research has found that fine‐tuning pre‐trained models is superior to training models from scratch for just‐in‐time (JIT) defect prediction. However, existing approaches that use pre‐trained models have limitations. First, the input length is constrained by the pre‐trained models. Second, the inputs are change‐agnostic. To address these limitations, we propose JIT‐Block, a JIT defect prediction method that combines multiple input semantics using the changed block as the fundamental unit. We restructured the JIT‐Defects4J dataset used in previous research and then conducted a comprehensive comparison against six state‐of‐the‐art baseline models, using eleven performance metrics that include both effort‐aware and effort‐agnostic measures. The results demonstrate that on the JIT defect prediction task, our approach outperforms the baseline models on all six of its metrics, with improvements ranging from 1.5% to 800% on effort‐agnostic metrics and 0.3% to 57% on effort‐aware metrics. On the JIT defect code line localization task, our approach outperforms the baseline models on three of five metrics, with improvements of 11% to 140%.
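The abstract's central idea, taking the changed block (a diff hunk) rather than the whole commit as the model's input unit, can be illustrated with a minimal sketch. Everything below is an assumption made for illustration only: the unified-diff input, the 512-token limit, and all function names are hypothetical and are not taken from the paper.

import re

MAX_TOKENS = 512  # assumed input-length limit of a typical pre-trained code model

def split_into_changed_blocks(diff_text: str) -> list[str]:
    # A unified diff marks each changed block (hunk) with an "@@" header;
    # splitting there yields one short input per block instead of one long
    # input per commit, sidestepping the model's length constraint.
    parts = re.split(r"(?m)^(?=@@ )", diff_text)
    return [p for p in parts if p.startswith("@@")]

def encode_blocks(diff_text: str, limit: int = MAX_TOKENS) -> list[list[str]]:
    # Whitespace tokens stand in for a real sub-word tokenizer; each block
    # is truncated independently, so no single input can overflow.
    return [block.split()[:limit] for block in split_into_changed_blocks(diff_text)]

demo_diff = (
    "--- a/Foo.java\n"
    "+++ b/Foo.java\n"
    "@@ -10,3 +10,4 @@ void bar() {\n"
    "-    int x = 0;\n"
    "+    int x = 1;\n"
    "+    log(x);\n"
)
for i, tokens in enumerate(encode_blocks(demo_diff)):
    print(f"block {i}: {len(tokens)} tokens")

A block-level input unit would also fit the paper's second task naturally: predictions attach to individual hunks rather than whole commits, which may be what supports defect code line localization.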
ISSN: 0266-4720, 1468-0394
DOI: 10.1111/exsy.13702