Boosting Accuracy of Differentially Private Continuous Data Release for Federated Learning

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, 2024, Vol. 19, pp. 10287-10301
Authors: Cai, Jianping; Ye, Qingqing; Hu, Haibo; Liu, Ximeng; Fu, Yanggeng
Format: Article
Language: English
Description
Abstract: Incorporating differentially private continuous data release (DPCR) into private federated learning (FL) has recently emerged as a powerful technique for enhancing accuracy. Designing an effective DPCR model is the key to improving accuracy, yet state-of-the-art DPCR models limit the achievable gains through insufficient privacy budget allocation and designs that support only specific iteration numbers. To boost accuracy further, we develop an augmented BIT-based continuous data release (AuBCR) model, leading to demonstrable accuracy enhancements. By employing a dual-release strategy, AuBCR gains the potential to further improve accuracy, but at the cost of a consistent-release requirement and a doubly nested privacy budget allocation problem. To address these challenges, we design an efficient optimal consistent estimation algorithm with only O(1) complexity per release. We then introduce the (k, N)-AuBCR model and design a meta-factor method, which reduces the number of optimization variables from O(T) to O(lg² T), greatly improving the solvability of the optimal privacy budget allocation problem while simultaneously supporting an arbitrary iteration number T. Our experiments on classical datasets show that AuBCR boosts accuracy by 4.9%–18.1% compared to traditional private FL and by 0.4%–1.2% compared to the state-of-the-art ABCRG model.
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2024.3477325
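
The abstract builds on BIT-based (binary indexed tree) continuous data release, where each released prefix sum is assembled from O(lg T) noisy dyadic partial sums. The paper's AuBCR-specific machinery (dual release, O(1) consistent estimation, meta-factor budget allocation) is not detailed in this record, so the sketch below shows only the classical binary-tree mechanism that such DPCR models refine, assuming a scalar stream with per-item sensitivity 1, pure ε-DP with Laplace noise, and a uniform per-level budget split; all identifiers are illustrative.

```python
import math
import numpy as np

def binary_tree_release(stream, eps, rng=None):
    """eps-DP prefix sums of `stream` via the classical binary-tree
    (BIT-style) mechanism: each element enters at most lg T dyadic
    partial sums, and each prefix sum reads O(lg T) noisy nodes.

    Assumes each x_t has sensitivity 1 (e.g., values clipped to [0, 1]).
    The uniform eps / levels split is an illustrative choice, not the
    paper's optimized privacy budget allocation.
    """
    rng = rng or np.random.default_rng()
    T = len(stream)
    levels = max(1, T.bit_length())   # dyadic levels needed for T items
    eps_level = eps / levels          # uniform per-level budget split
    alpha = [0.0] * (levels + 1)      # clean dyadic partial sums
    noisy = [0.0] * (levels + 1)      # their noised counterparts
    releases = []
    for t, x in enumerate(stream, start=1):
        i = (t & -t).bit_length() - 1   # level of lowest set bit of t
        alpha[i] = sum(alpha[:i]) + x   # merge finished lower blocks
        for j in range(i):              # lower blocks are now consumed
            alpha[j] = noisy[j] = 0.0
        noisy[i] = alpha[i] + rng.laplace(scale=1.0 / eps_level)
        # Prefix-sum estimate = noisy nodes on the set bits of t.
        releases.append(sum(noisy[j] for j in range(levels + 1)
                            if (t >> j) & 1))
    return releases
```

In the FL setting the abstract describes, the stream elements would be per-round aggregated (clipped) model updates, and the server reconstructs each round's model from the released prefix sums. Because each prefix sum aggregates only O(lg T) noise terms, cumulative error grows polylogarithmically in the iteration count T rather than accumulating round by round, which is the accuracy gain that DPCR-based private FL exploits and that AuBCR's dual-release and budget-allocation refinements are designed to improve further.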