Private and Federated Stochastic Convex Optimization: Efficient Strategies for Centralized Systems
Saved in:
Main author:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: This paper addresses the challenge of preserving privacy in Federated
Learning (FL) within centralized systems, focusing on both trusted and
untrusted server scenarios. We analyze this setting within the Stochastic
Convex Optimization (SCO) framework and devise methods that ensure
Differential Privacy (DP) while maintaining optimal convergence rates for
homogeneous and heterogeneous data distributions. Our approach, based on a
recent stochastic optimization technique, offers linear computational
complexity, comparable to non-private FL methods, and reduced gradient
obfuscation. This work enhances the practicality of DP in FL, balancing
privacy, efficiency, and robustness across a variety of server trust environments.
DOI: 10.48550/arxiv.2407.12396