Deployment of a change‐level software defect prediction solution into an industrial setting
Published in: Journal of Software: Evolution and Process, 2021-11, Vol. 33 (11), p. n/a
Main authors: , , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Applying change‐level software defect prediction (SDP) in practice poses several challenges regarding model validation techniques, data accuracy, and prediction performance consistency. Only a few studies report on these challenges in an industrial context. We share our experience in integrating an SDP model into an industrial context. We investigate whether an SDP model's "offline" performance reflects its "online" (real‐life) performance, along with other deployment decisions: the model re‐training process and the update period. We employ an online prediction strategy that considers only the actual labels of training commits available at the time of prediction and compare its performance against an offline prediction strategy. We empirically assess the online SDP's performance under various lengths of the time gap between the training and test sets and various model update periods. The online SDP's performance successfully reached its offline performance. The time gap between the training and test commits and the model update period significantly affect online performance, by 37% and 18% in terms of probability of detection (pd), respectively. We deploy the best SDP solution (73% pd) with an 8‐month time gap and a 3‐day update period. Contextual factors may determine a model's performance in practice, as well as its consistency and trustworthiness. As future work, we plan to investigate the reasons for fluctuations in model performance over time.
Summary: We share our experience in integrating a change‐level software defect prediction (SDP) model into an industrial context. We empirically investigate whether an "offline" SDP could reflect its "online" (real‐life) performance, along with other deployment decisions: the model re‐training process and the update period. The online SDP's performance successfully reached its offline performance. The time gap between the training and test commits and the model update period significantly impact online performance, by 37% and 18% in terms of probability of detection, respectively.
ISSN: 2047-7473, 2047-7481
DOI: 10.1002/smr.2381
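
The abstract describes the evaluation design only at a high level. For readers who want to see the shape of such a time-aware "online" evaluation, below is a minimal sketch in Python. It is an illustration under stated assumptions, not the authors' pipeline: the synthetic commit stream, the two features, and the logistic-regression learner are invented for the example. Only the 240-day (roughly 8-month) train/test time gap, the 3-day update period, and the pd metric (probability of detection, i.e., recall = TP / (TP + FN)) come from the abstract.

```python
# Sketch of a time-aware "online" SDP evaluation, as the abstract describes it.
# Assumptions (not from the paper): synthetic commits, two toy features, and a
# logistic-regression learner. Only the gap/update defaults and pd are sourced.
from dataclasses import dataclass
from datetime import datetime, timedelta
import random

from sklearn.linear_model import LogisticRegression


@dataclass
class Commit:
    when: datetime
    features: list          # e.g., churn, number of files touched (invented)
    defective: bool         # ground-truth label, only known some months later


def make_commits(n=4000, start=datetime(2020, 1, 1)):
    """Synthetic commit stream; defect odds grow with churn (feature 0)."""
    rng = random.Random(42)
    commits = []
    for i in range(n):
        churn = rng.expovariate(1 / 50)
        files = float(rng.randint(1, 20))
        p_defect = min(0.9, 0.05 + churn / 500)
        commits.append(Commit(start + timedelta(hours=4 * i),
                              [churn, files], rng.random() < p_defect))
    return commits


def online_pd(commits, gap_days=240, update_days=3):
    """Replay the commit history: every `update_days`, retrain on commits at
    least `gap_days` old (whose true labels are assumed known by then), then
    score the commits arriving before the next update.
    Returns pd = TP / (TP + FN)."""
    gap, update = timedelta(days=gap_days), timedelta(days=update_days)
    t = commits[0].when + gap + update
    tp = fn = 0
    while t < commits[-1].when:
        train = [c for c in commits if c.when <= t - gap]
        test = [c for c in commits if t < c.when <= t + update]
        # Retrain only when both classes are present in the training window.
        if train and test and len({c.defective for c in train}) == 2:
            clf = LogisticRegression().fit(
                [c.features for c in train],
                [c.defective for c in train])
            for c, pred in zip(test, clf.predict([c.features for c in test])):
                if c.defective:
                    tp += bool(pred)
                    fn += not pred
        t += update
    return tp / (tp + fn) if tp + fn else float("nan")


if __name__ == "__main__":
    print(f"online pd: {online_pd(make_commits()):.2f}")
```

The design point the abstract emphasizes is that, at each retraining, only commits whose true labels would already be known at prediction time are eligible for training; that label-maturity constraint is what the `gap_days` cutoff models here.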