Differential Privacy-Enabled Multi-Party Learning with Dynamic Privacy Budget Allocating Strategy

Bibliographic Details
Published in: Electronics (Basel), 2023-02, Vol. 12 (3), p. 658
Main authors: Pan, Ke; Feng, Kaiyuan
Format: Article
Language: English
Online access: Full text
Description

Abstract: As one of the promising paradigms of decentralized machine learning, multi-party learning has attracted increasing attention owing to its capability of preventing participants’ private data from being directly exposed to adversaries. Multi-party learning enables participants to train their models locally without uploading private data to a server. However, recent studies have shown that adversaries may launch a series of attacks on learning models and extract private information about participants by analyzing the shared parameters. Moreover, existing privacy-preserving multi-party learning approaches consume a high total privacy budget, which poses a considerable challenge to the trade-off between privacy guarantees and model utility. To address this issue, this paper explores an adaptive differentially private multi-party learning framework that incorporates the zero-concentrated differential privacy (zCDP) technique into multi-party learning to mitigate privacy threats and to obtain sharper quantitative privacy-loss bounds. We further design a dynamic privacy budget allocating strategy that curbs the accumulation of the total privacy budget and provides stronger privacy guarantees without compromising the model’s utility: more noise is injected into the model parameters in the early stages of training, and the volume of noise is gradually reduced as the direction of gradient descent becomes more accurate. Theoretical analysis and extensive experiments on benchmark datasets validate that our approach improves the model’s performance with less privacy loss.
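The noise schedule described in the abstract lends itself to a short illustration. Below is a minimal Python sketch, not the authors’ published algorithm: clipped model updates receive Gaussian noise whose multiplier decays linearly over training rounds, so early rounds are noisier than late ones. The decay shape and every name here (noise_scale, privatize_update, sigma_start, sigma_end, clip_norm) are illustrative assumptions; the paper’s actual allocation is driven by its zCDP accounting, under which a mechanism satisfies ρ-zCDP if the α-Rényi divergence between its outputs on adjacent datasets is at most ρα for all α > 1.

import numpy as np

def noise_scale(t, T, sigma_start=8.0, sigma_end=1.0):
    # Linearly decay the Gaussian noise multiplier from sigma_start
    # (round 0) down to sigma_end (round T - 1). Illustrative schedule only.
    frac = t / max(T - 1, 1)
    return sigma_start + frac * (sigma_end - sigma_start)

def privatize_update(grad, t, T, clip_norm=1.0, rng=np.random.default_rng(0)):
    # Clip a participant's update to bound its sensitivity, then add
    # round-dependent Gaussian noise before sharing it with the server.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    sigma = noise_scale(t, T)
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

# Noise shrinks across rounds: early updates are far noisier than late ones.
T = 100
g = np.ones(10)
noisy_early = privatize_update(g, t=0, T=T)     # sigma = 8.0
noisy_late = privatize_update(g, t=T - 1, T=T)  # sigma = 1.0

Note that less noise means a larger per-round zCDP cost (the Gaussian mechanism with sensitivity Δ and noise standard deviation σ satisfies (Δ²/2σ²)-zCDP), and zCDP costs add across rounds, so a schedule like this must be paired with an overall budget accountant.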
ISSN: 2079-9292
DOI: 10.3390/electronics12030658