A novel local differential privacy federated learning under multi-privacy regimes


Bibliographic Details
Published in: Expert Systems with Applications, 2023-10, Vol. 227, p. 120266, Article 120266
Authors: Liu, Chun; Tian, Youliang; Tang, Jinchuan; Dang, Shuping; Chen, Gaojie
Format: Article
Language: English
Description
Summary: Local differential privacy federated learning (LDP-FL) is a framework for achieving strong local data privacy protection while training a model in a decentralized environment. Current LDP-FL training suffers from efficiency problems because many existing studies combine LDP and FL without examining the relationship between the two most important parameters: the privacy budget for privacy protection and the gradients for model training. In this work, we propose a novel LDP-FL framework under multi-privacy regimes to address these problems. First, unlike existing multi-privacy-regime LDP-FL methods, which compute a biased global gradient, we propose an unbiased mean estimator based on maximum likelihood estimation (MLE) that yields small-variance global gradients and higher training accuracy. Second, to improve the efficiency of model training in multi-privacy scenarios, we design two dynamic privacy budget allocation approaches for users to choose from: the first allocates the privacy budget based on the training model's accuracy, while the second grows the privacy budget linearly, avoiding the computational cost of the comparison operation. Finally, since directly perturbing the high-dimensional local gradients, as in traditional methods, leads to considerable utility loss, we propose a layered dimension selection strategy that randomly selects the layers of gradients that take part in the noise perturbation while leaving the others untouched. In simulations on the MNIST handwritten-digit and Fashion-MNIST datasets, we compare our framework with traditional LDP-FL, simple personalized mean estimation (S-PME), and PLU-FedOA; the results demonstrate the training efficiency of our framework.
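The unbiased mean estimation under multiple privacy budgets can be illustrated with a small sketch. Assuming Gaussian perturbation noise whose per-user variance is set by that user's privacy budget (an assumption for illustration; the paper's exact LDP mechanism and MLE derivation are not reproduced here), the MLE of the common mean given known per-user variances is the inverse-variance weighted average, which is unbiased and has smaller variance than the plain average:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(grad, eps, sensitivity=1.0):
    # Illustrative Gaussian-style perturbation: a smaller privacy budget
    # eps means a larger noise scale. Returns the noisy gradient and the
    # noise variance, which the server is assumed to know.
    sigma = sensitivity / eps
    return grad + rng.normal(0.0, sigma, size=grad.shape), sigma ** 2

def mle_mean(noisy_grads, variances):
    # Under independent Gaussian noise with known per-user variances, the
    # maximum likelihood estimate of the shared mean is the
    # inverse-variance weighted average of the reports; it is unbiased
    # and has minimal variance among linear unbiased estimators.
    w = 1.0 / np.asarray(variances)
    w = w / w.sum()
    return np.tensordot(w, np.stack(noisy_grads), axes=1)

# Simulated multi-privacy regime: each user holds the same true gradient
# but perturbs it under a different privacy budget.
true_grad = np.ones(4)
budgets = [0.5, 1.0, 2.0, 4.0]
noisy, variances = zip(*(perturb(true_grad, e) for e in budgets))

est = mle_mean(noisy, variances)      # inverse-variance weighted (MLE)
naive = np.mean(noisy, axis=0)        # plain average for comparison
```

Users with larger budgets report less noisy gradients and so receive larger weights; the plain average instead lets the noisiest reports dominate the error.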
Highlights:
• Multi-privacy estimator obtains more accurate global gradients than existing ones.
• Dynamic privacy budget allocation improves model efficiency over fixed allocation.
• Layered dimension selection reduces the noise perturbation on gradients.
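The layered dimension selection idea can be sketched as follows. Assuming Gaussian-style noise and treating the model's gradient as a list of per-layer arrays (the paper's exact mechanism and selection distribution are not reproduced here), only a random subset of layers is perturbed before upload, while the remaining layers are sent untouched:

```python
import numpy as np

rng = np.random.default_rng(1)

def layered_perturb(layer_grads, eps, k, sensitivity=1.0):
    # Layered dimension selection (sketch): randomly pick k of the model's
    # layers and add noise only to those. Leaving the other layers
    # untouched limits the utility loss that comes from perturbing every
    # dimension of a high-dimensional gradient.
    chosen = set(rng.choice(len(layer_grads), size=k, replace=False))
    sigma = sensitivity / eps  # illustrative Gaussian-style noise scale
    noisy = [
        g + rng.normal(0.0, sigma, size=g.shape) if i in chosen else g.copy()
        for i, g in enumerate(layer_grads)
    ]
    return noisy, chosen

# A toy 4-layer gradient; perturb 2 randomly chosen layers.
layers = [np.zeros(3), np.zeros(5), np.zeros(2), np.zeros(4)]
noisy_layers, chosen = layered_perturb(layers, eps=1.0, k=2)
```

The trade-off is that fewer perturbed layers mean less injected noise per round, at the cost of the randomness deciding which layers carry the privacy protection in a given round.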
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2023.120266