DP-GSGLD: A Bayesian optimizer inspired by differential privacy defending against privacy leakage in federated learning

Bibliographic Details
Published in: Computers & Security, July 2024, Vol. 142, Article 103839
Authors: Yang, Chengyi; Jia, Kun; Kong, Deli; Qi, Jiayin; Zhou, Aimin
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Stochastic Gradient Langevin Dynamics (SGLD) is believed to preserve differential privacy as an intrinsic attribute, since it obtains randomness from posterior sampling and natural noise. In this paper, we propose Differentially Private General Stochastic Gradient Langevin Dynamics (DP-GSGLD), a novel variant of SGLD that realizes gradient estimation in parameter updating through Bayesian sampling. We introduce the technique of parameter clipping and prove that DP-GSGLD satisfies the property of Differential Privacy (DP). We conduct experiments on several image datasets to defend against the gradient attacks that commonly appear in federated learning scenarios. The results demonstrate that DP-GSGLD reduces model training time and achieves higher accuracy at the same privacy level.
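For orientation, the sketch below illustrates the two ingredients the abstract names: a plain SGLD update (a half gradient step on the log-posterior plus Gaussian noise scaled to the step size) combined with parameter clipping. It is a minimal sketch under assumed names and defaults, not the authors' DP-GSGLD algorithm; the Bayesian-sampling gradient estimation and the formal DP proof appear only in the full text.

import numpy as np

def sgld_step_with_clipping(theta, grad_log_post, step_size, clip_bound, rng):
    """One illustrative SGLD update with parameter clipping (hypothetical names)."""
    # Langevin proposal: half a gradient step on the log-posterior plus
    # Gaussian noise whose variance equals the step size.
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    theta_new = theta + 0.5 * step_size * grad_log_post + noise
    # Parameter clipping: project the updated parameters back into an
    # L2 ball of radius clip_bound, which bounds the sensitivity used
    # in a DP analysis.
    norm = np.linalg.norm(theta_new)
    if norm > clip_bound:
        theta_new *= clip_bound / norm
    return theta_new

# Hypothetical usage with a standard-normal posterior, where
# grad log p(theta) = -theta.
rng = np.random.default_rng(0)
theta = np.zeros(10)
for _ in range(1000):
    theta = sgld_step_with_clipping(theta, -theta, step_size=1e-2,
                                    clip_bound=1.0, rng=rng)

Note the design contrast this makes visible: unlike DP-SGD, which clips per-example gradients before noising, the clipping here acts on the parameters themselves, while the injected randomness comes from the Langevin noise that SGLD already requires.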
ISSN: 0167-4048 (print); 1872-6208 (electronic)
DOI: 10.1016/j.cose.2024.103839