Heterogeneous Defect Prediction Algorithm Combined with Federated Sparse Compression


Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Authors: Wang, Aili; Zhao, Yinghui; Yang, Linlin; Wu, Haibin; Iwahori, Yuji
Format: Article
Language: English
Description
Abstract: Heterogeneous defect prediction (HDP) builds a defect prediction model from a source project in order to predict the defect proneness of a target project. HDP based on federated learning can combine multi-party defect data to improve prediction performance while protecting privacy. However, as task complexity and model performance requirements grow, neural networks become deeper and the number of model parameters increases accordingly. In a federated learning scenario with many model parameters, limited communication bandwidth, and many clients, the server receives a very large volume of data, which creates heavy communication pressure and seriously degrades overall training efficiency. To reduce the communication cost, this paper proposes a federated sparse compression (FSC) algorithm. First, to improve the generalization performance of the model, each participant trains locally with CapsNet, which achieves better prediction performance by exploiting the relative position information of feature combinations. The gradient parameters are then protected with differential privacy to ensure data security. To reduce the number of communicated bits, the protected model parameters undergo sparse binary compression, converting model training from dense to sparse computation, and are Golomb-encoded before being sent to the server for aggregation. Finally, the server decodes the received data, applies sparse binary compression and encoding in turn, and sends the result back to each participant. Experiments on nine projects from three public repositories (Relink, NASA, and AEEEM) show that FSC effectively reduces the number of communicated bits.
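The sparse-binary-compression and Golomb-encoding step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the sparsity level, the divisor `m`, and all function names are assumptions chosen for illustration, and the differential-privacy and CapsNet stages are omitted.

```python
import numpy as np

def sparse_binary_compress(grad, sparsity=0.01):
    """Keep the top `sparsity` fraction of gradient entries by magnitude,
    and binarize the survivors to sign * mean magnitude (illustrative)."""
    flat = grad.ravel()
    k = max(1, int(sparsity * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of top-k entries
    idx.sort()                                    # sorted, so gaps are small
    mean_mag = np.abs(flat[idx]).mean()
    signs = np.sign(flat[idx])
    return idx, signs, mean_mag

def golomb_encode(indices, m):
    """Golomb-encode the gaps between sorted sparse indices.
    `m` is the tunable divisor; a power of two gives Rice coding,
    for which the remainder fits in exactly ceil(log2(m)) bits."""
    b = int(np.ceil(np.log2(m)))
    bits, prev = [], -1
    for i in indices:
        gap = int(i - prev - 1)
        prev = int(i)
        q, r = divmod(gap, m)
        bits.extend([1] * q + [0])                             # unary quotient
        bits.extend((r >> j) & 1 for j in reversed(range(b)))  # binary remainder
    return bits

def golomb_decode(bits, m, count):
    """Recover `count` sorted indices from a Golomb-encoded gap stream."""
    b = int(np.ceil(np.log2(m)))
    out, pos, prev = [], 0, -1
    for _ in range(count):
        q = 0
        while bits[pos] == 1:        # read unary quotient
            q += 1
            pos += 1
        pos += 1                     # skip the 0 terminator
        r = 0
        for _ in range(b):           # read fixed-width remainder
            r = (r << 1) | bits[pos]
            pos += 1
        prev = prev + q * m + r + 1
        out.append(prev)
    return out
```

A client would transmit only the bit stream, the shared sign/magnitude summary, and the count, which is the source of the bit savings the paper measures.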
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3253765