Split Aggregation: Lightweight Privacy-Preserving Federated Learning Resistant to Byzantine Attacks

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, 2024-01, Vol. 19, p. 1-1
Authors: Lu, Zhi; Lu, SongFeng; Cui, YongQuan; Tang, XueMing; Wu, JunJun
Format: Article
Language: English
Description
Abstract: Federated Learning (FL), a distributed learning paradigm that reduces communication costs and enhances privacy by uploading gradients instead of raw data, now confronts security challenges. It is particularly vulnerable to Byzantine poisoning attacks and to privacy breaches via inference attacks. While homomorphic encryption and secure multi-party computation have been employed to design robust FL mechanisms, these predominantly rely on Euclidean-distance or median-based metrics and often fall short in defending against advanced poisoning attacks, such as adaptive attacks. To address this issue, our study introduces "Split-Aggregation", a lightweight privacy-preserving FL solution capable of withstanding adaptive attacks. The method has a computational complexity of O(dkN + k³) and a communication overhead of O(dN), performing comparably to FedAvg when k = 10. Here, d is the gradient dimension, N the number of users, and k the rank chosen for the randomized singular value decomposition. Additionally, we use adaptive weight coefficients to mitigate gradient-descent degradation for honest users caused by non-independent and identically distributed (Non-IID) data. The method's security and robustness are proven theoretically, and its complexity is analyzed in detail. Experimental results show that at k = 10 the method surpasses the top-1 accuracy of current state-of-the-art robust privacy-preserving FL approaches, and that choosing a smaller k significantly boosts efficiency with only a marginal loss in accuracy.
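The O(dkN + k³) figure in the abstract matches the cost profile of a rank-k randomized SVD applied to the d × N matrix of stacked user gradients: the random projections dominate at O(dkN), while the remaining factorization work involves only k-sized matrices. The sketch below is a generic randomized SVD, not the paper's actual Split-Aggregation protocol; the function and variable names are illustrative assumptions.

```python
import numpy as np

def randomized_svd(G, k, seed=0):
    """Rank-k randomized SVD of a d x N matrix G (columns = user gradients).

    The two products with G each cost O(dkN); the remaining QR and small
    SVD operate on k-width matrices only.
    """
    rng = np.random.default_rng(seed)
    # Random test matrix compresses the N columns down to k directions.
    Omega = rng.standard_normal((G.shape[1], k))
    Y = G @ Omega                       # d x k sample of range(G), O(dkN)
    Q, _ = np.linalg.qr(Y)              # orthonormal basis, d x k
    B = Q.T @ G                         # k x N projected problem, O(dkN)
    U_small, S, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small                     # lift back to d dimensions
    return U, S, Vt

# Illustrative use: d = 1000-dim gradients from N = 20 users, rank k = 10.
d, N, k = 1000, 20, 10
rng = np.random.default_rng(1)
# Build an exactly rank-5 gradient matrix so the rank-10 sketch recovers it.
G = rng.standard_normal((d, 5)) @ rng.standard_normal((5, N))
U, S, Vt = randomized_svd(G, k)
G_approx = U @ np.diag(S) @ Vt
```

Because the synthetic gradient matrix has rank 5 ≤ k, the random projection captures its full column space and the reconstruction is exact up to floating-point error; for full-rank gradient matrices the result is instead a near-optimal rank-k approximation.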
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2024.3402993