Weighted distributed differential privacy ERM: Convex and non-convex
Published in: Computers & Security, 2021-07, Vol. 106, p. 102275, Article 102275
Format: Article
Language: English
Online access: Full text
Abstract: Distributed machine learning allows different parties to learn a single model over all data sets without disclosing their own data. In this paper, we propose a weighted distributed differentially private (WD-DP) empirical risk minimization (ERM) method to train a model in the distributed setting, taking into account the different weights of different clients. For the first time, we theoretically analyze the benefits brought by the weighted paradigm in distributed differentially private machine learning. Our method advances the state-of-the-art differentially private ERM methods in the distributed setting. Through detailed theoretical analysis, we show that in the distributed setting, the noise bound and the excess empirical risk bound can be improved by considering the different weights held by multiple parties. Additionally, since strong convexity of the loss function is not easy to achieve in some situations, we generalize our method to the case where the loss function is not required to be strongly convex but instead satisfies the Polyak-Łojasiewicz condition. Experiments on real data sets show that our method is more reliable and improves the performance of distributed differentially private ERM, especially when the data scales on different clients are uneven. Moreover, our distributed method achieves almost the same theoretical and experimental results as previous centralized methods.
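The weighted-aggregation idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's exact mechanism: the function name `weighted_noisy_aggregate`, weighting clients by data size, and the Gaussian noise scale `sigma` are all assumptions made for the sketch. Each client contributes a gradient, the server combines them with normalized weights, and calibrated Gaussian noise is added for differential privacy.

```python
import numpy as np

def weighted_noisy_aggregate(gradients, weights, sigma, rng=None):
    """Combine per-client gradients with normalized weights and add
    Gaussian noise.

    Hypothetical sketch of weighted distributed DP aggregation:
    `weights` might be proportional to each client's data size, and
    `sigma` is the Gaussian noise scale (in a real deployment it would
    be calibrated to the gradient sensitivity and the privacy budget).
    """
    rng = np.random.default_rng(rng)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to sum to 1
    # Weighted combination of the client gradients
    agg = sum(w * g for w, g in zip(weights, gradients))
    # Gaussian perturbation for differential privacy
    noise = rng.normal(0.0, sigma, size=agg.shape)
    return agg + noise

# Example: two clients with uneven data scales (weights 3 and 1)
g1, g2 = np.ones(3), np.zeros(3)
update = weighted_noisy_aggregate([g1, g2], weights=[3, 1], sigma=0.1)
```

Weighting by data size rather than averaging uniformly is what lets the noise and excess-risk bounds improve when client data scales are uneven, which is the regime the abstract highlights.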
ISSN: 0167-4048, 1872-6208
DOI: 10.1016/j.cose.2021.102275