A Verifiable Privacy-Preserving Federated Learning Framework Against Collusion Attacks
Published in: IEEE Transactions on Mobile Computing, 2024-12, pp. 1-17
Main authors: , , , , ,
Format: Magazine article
Language: English
Subjects:
Online access: Order full text
Abstract: Most existing privacy-preserving federated learning schemes are vulnerable to collusion attacks and lack a mechanism for participants to verify the aggregation results of the parameter server, leading to privacy breaches for users and inaccurate model training outcomes. To address these issues, we propose a verifiable privacy-preserving federated learning framework that resists collusion attacks. First, the federated learning scheme is reconstructed using the ElGamal encryption algorithm, which protects the data privacy of participants even when some participants collude with the servers. Second, an assistant server is introduced so that gradient ciphertexts are jointly decrypted by the non-colluding parameter server and assistant server, which effectively resists internal attacks by a single parameter server during data upload. Third, the scheme provides a verification mechanism that enables participants to verify the correctness and integrity of the parameter server's aggregated results, preventing the parameter server from returning incorrect aggregation results to participants. Experimental results and performance analysis demonstrate that the proposed scheme strengthens security while preserving the accuracy of model training, outperforming many existing schemes in both security and correctness.
ISSN: 1536-1233, 1558-0660
DOI: 10.1109/TMC.2024.3516119
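The abstract describes encrypting gradients with ElGamal and splitting decryption between a non-colluding parameter server and an assistant server. As a rough illustration of that idea only (not the paper's exact construction), the Python sketch below uses exponential ElGamal over a small prime-order group with the secret key additively shared between two servers: ciphertexts of individual gradients can be multiplied to aggregate the underlying plaintexts, and recovering the sum requires partial decryptions from both servers. The group parameters, function names, and the brute-force discrete-log step are illustrative assumptions.

```python
# Illustrative sketch only: exponential ElGamal with a two-server split key.
# Group parameters, names, and the brute-force discrete log are simplifying
# assumptions, not the construction from the paper.
import random

# Small safe prime p = 2q + 1; real deployments would use a large group.
q = 1019
p = 2 * q + 1                 # p = 2039
g = 4                         # generator of the order-q subgroup

# Secret key is additively shared between the parameter server (x1)
# and the assistant server (x2); neither share alone can decrypt.
x1 = random.randrange(1, q)
x2 = random.randrange(1, q)
h = pow(g, (x1 + x2) % q, p)  # joint public key

def encrypt(m):
    """Exponential ElGamal: Enc(m) = (g^r, g^m * h^r)."""
    r = random.randrange(1, q)
    return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

def add_ciphertexts(c, d):
    """Component-wise product encrypts the sum of the plaintexts."""
    return ((c[0] * d[0]) % p, (c[1] * d[1]) % p)

def partial_decrypt(c, share):
    """Each server raises c1 to its own key share."""
    return pow(c[0], share, p)

def joint_decrypt(c, part1, part2, max_sum=200):
    """Combine both partial decryptions, then brute-force the small exponent."""
    gm = (c[1] * pow((part1 * part2) % p, p - 2, p)) % p
    for m in range(max_sum + 1):
        if pow(g, m, p) == gm:
            return m
    raise ValueError("plaintext outside brute-force range")

# Participants encrypt small (quantized) gradient values.
gradients = [3, 7, 5]
agg = encrypt(0)
for grad in gradients:
    agg = add_ciphertexts(agg, encrypt(grad))

# The aggregate can only be opened with both servers' partial decryptions.
d1 = partial_decrypt(agg, x1)
d2 = partial_decrypt(agg, x2)
print(joint_decrypt(agg, d1, d2))   # -> 15
```

In this toy setup, a parameter server that sees only ciphertexts and its own share x1 learns nothing about individual gradients, mirroring the non-collusion assumption between the two servers; the abstract's verification mechanism for aggregation results is a separate component not shown here.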