RAFLS: RDP-Based Adaptive Federated Learning With Shuffle Model
| Published in | IEEE Transactions on Dependable and Secure Computing, 2024-07, pp. 1-14 |
|---|---|
| Main authors | , , , , , , , , |
| Format | Article |
| Language | English |
| Keywords | |
| Online access | Order full text |
| Abstract | Federated Learning (FL) realizes distributed machine learning training by sharing model updates rather than raw data, thus preserving data privacy. However, an attacker may still infer a client's original local data from the model parameters, causing data leakage. While Differential Privacy (DP) is designed to address data leakage issues in FL, injecting noise during training reduces model accuracy. To minimize the negative impact of noise on model accuracy while preserving privacy, in this paper we propose an adaptive FL model, entitled RDP-based Adaptive Federated Learning in the Shuffle model (RAFLS). To ensure the privacy of clients' datasets, we inject adaptive noise into each client's local model by leveraging the layer-wise adaptive sensitivity of the local model. Our approach shuffles all local model parameters to address the privacy explosion caused by high-dimensional aggregation and multiple iterations. We further propose a fine-grained model weight aggregation scheme to aggregate all local models and obtain a global model. Our experimental evaluations demonstrate that the proposed RAFLS outperforms state-of-the-art methods in reducing the impact of noise on model accuracy while protecting data; e.g., the accuracy of RAFLS is 1.54% higher than that of the baseline scheme with $\epsilon = 2.0$ on FashionMNIST under the IID setting. (Illustrative sketches of these three steps appear after this record.) |
| ISSN | 1545-5971; 1941-0018 |
| DOI | 10.1109/TDSC.2024.3429503 |
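
The record above only summarizes the three steps of RAFLS; the paper's exact mechanisms are not given here. As a rough illustration of the first step, the sketch below shows the general shape of layer-wise adaptive Gaussian noise injection on a client's local model. The clipping rule (a fixed fraction of each layer's L2 norm) and the `noise_multiplier` parameter are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch of layer-wise adaptive noise injection (step 1).
# The per-layer clip bound below is an assumed heuristic, not RAFLS itself.
import numpy as np

def perturb_local_model(layers, clip_fraction=0.9, noise_multiplier=1.0, rng=None):
    """Clip each layer to an adaptive per-layer bound, then add Gaussian noise.

    layers: dict mapping layer name -> np.ndarray of parameters.
    noise_multiplier: ratio of Gaussian sigma to the layer's clip bound; in an
        RDP analysis this would be derived from the privacy budget.
    """
    rng = rng or np.random.default_rng()
    noisy = {}
    for name, w in layers.items():
        norm = np.linalg.norm(w)
        clip = clip_fraction * norm + 1e-12            # assumed adaptive per-layer sensitivity
        clipped = w * min(1.0, clip / (norm + 1e-12))  # standard L2 clipping
        sigma = noise_multiplier * clip                # noise scales with the layer's sensitivity
        noisy[name] = clipped + rng.normal(0.0, sigma, size=w.shape)
    return noisy
```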
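For the second step, in the shuffle model a trusted shuffler permutes clients' reports so the server cannot link a value to its sender. The coordinate-wise permutation below, which leaves any coordinate-wise sum or mean unchanged, is one simple way to read "shuffles all local model parameters"; it is an assumption, not the paper's protocol.

```python
# Hypothetical shuffler (step 2): independently permute, for every parameter
# index, which client contributed which value. Coordinate-wise aggregates
# (sum/mean) are unchanged, but the client-to-value linkage is destroyed.
import numpy as np

def shuffle_parameters(client_vectors, rng=None):
    """client_vectors: list of equally shaped 1-D np.ndarrays, one per client."""
    rng = rng or np.random.default_rng()
    stacked = np.stack(client_vectors)               # (n_clients, n_params)
    idx = rng.random(stacked.shape).argsort(axis=0)  # random permutation per column
    shuffled = np.take_along_axis(stacked, idx, axis=0)
    return list(shuffled)
```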
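Finally, the abstract mentions a fine-grained model weight aggregation scheme but gives no details. The sketch below falls back to per-layer weighted averaging (FedAvg-style, weighted by local dataset size), purely as a placeholder for whatever finer-grained weighting RAFLS actually uses.

```python
# Placeholder aggregation (step 3): per-layer weighted average of the
# clients' (noised, shuffled) models. RAFLS's actual fine-grained weighting
# is not described in the record, so dataset-size weights are assumed here.
import numpy as np

def aggregate(client_models, client_sizes):
    """client_models: list of dicts (layer name -> np.ndarray).
    client_sizes: list of local dataset sizes, used as aggregation weights."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    global_model = {}
    for name in client_models[0]:
        # Weighted sum of this layer across all clients.
        global_model[name] = sum(w * m[name] for w, m in zip(weights, client_models))
    return global_model
```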