Faster Convergence on Differential Privacy-Based Federated Learning
Published in: IEEE Internet of Things Journal, 2024-06, Vol. 11 (12), pp. 22578-22589
Main authors: , , ,
Format: Article
Language: English
Abstract: As a novel distributed machine learning approach, federated learning (FL) trains a global model while preserving data privacy. However, several studies have shown that adversaries can still recover private information from the shared gradients. Differential privacy (DP) is a rigorous mathematical tool for protecting records in a database against leakage, and it has been widely applied in FL by perturbing the gradients. Nevertheless, using DP in FL inevitably degrades the convergence performance of the global model. In this article, we implement a DP-based FL scheme that achieves local DP (LDP) by adding well-designed Gaussian noise to the gradients before clients upload them to the server. We then propose two strategies to improve the convergence performance of DP-based FL; both modify the local objective function to limit the effect of LDP noise on convergence without degrading the privacy protection level. We further provide a detailed framework that adopts the LDP scheme and the two strategies. Simulation results on different machine learning models show that our framework converges up to 40% faster under different noise levels than other DP-based FL schemes. Finally, we establish the theoretical convergence guarantee of the proposed framework by first deriving the expected decrease in the global loss function for one round of training and then providing an upper convergence bound after multiple communication rounds.
ISSN: 2327-4662
DOI: 10.1109/JIOT.2024.3383226
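The abstract describes an LDP mechanism in which each client perturbs its gradient with well-designed Gaussian noise before uploading it to the server. Below is a minimal sketch of that noising step, assuming the standard clip-then-add-Gaussian-noise mechanism; the names perturb_gradient, clip_bound, and noise_multiplier are illustrative assumptions, not the authors' implementation, which additionally modifies the local objective function to counteract the noise.

```python
# Minimal sketch (illustrative only): a client clips its local gradient and
# adds calibrated Gaussian noise before uploading it to the FL server.
import numpy as np

def perturb_gradient(grad, clip_bound=1.0, noise_multiplier=1.1, rng=None):
    """Clip grad to L2 norm clip_bound, then add Gaussian noise with
    standard deviation noise_multiplier * clip_bound (Gaussian mechanism)."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_bound / (norm + 1e-12))  # bound L2 sensitivity
    noise = rng.normal(0.0, noise_multiplier * clip_bound, size=grad.shape)
    return clipped + noise

# Example: a client perturbs its local gradient before sending it to the server.
local_grad = np.random.default_rng(0).standard_normal(10)
noisy_grad = perturb_gradient(local_grad)
```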