Differentially Private Online-to-Batch for Smooth Losses
Format: Article
Language: English
Abstract: We develop a new reduction that converts any online convex optimization algorithm suffering $O(\sqrt{T})$ regret into an $\epsilon$-differentially private stochastic convex optimization algorithm with the optimal convergence rate $\tilde O(1/\sqrt{T} + \sqrt{d}/\epsilon T)$ on smooth losses in linear time, forming a direct analogy to the classical non-private "online-to-batch" conversion. By applying our techniques to more advanced adaptive online algorithms, we produce adaptive differentially private counterparts whose convergence rates depend on a priori unknown variances or parameter norms.
DOI: 10.48550/arxiv.2210.06593
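
For readers unfamiliar with the baseline the abstract refers to, the sketch below illustrates the classical non-private online-to-batch conversion: run an online learner (here, online gradient descent over a bounded domain) on one fresh i.i.d. sample per round and return the average iterate, whose expected excess risk is bounded by (expected regret)/T, i.e. $O(1/\sqrt{T})$ for an $O(\sqrt{T})$-regret learner. This is a minimal sketch of the classical conversion only, not the paper's private reduction; the names `online_to_batch`, `grad`, `lr`, and `radius` are illustrative assumptions.

```python
import numpy as np

def online_to_batch(grad, data, dim, lr=0.1, radius=1.0):
    """Classical (non-private) online-to-batch conversion.

    Feeds one i.i.d. sample per round to online gradient descent and
    returns the running average of the iterates; by the standard
    online-to-batch argument, its expected excess risk is at most
    (expected regret)/T, i.e. O(1/sqrt(T)) for convex losses.
    """
    w = np.zeros(dim)      # current online iterate w_t
    avg = np.zeros(dim)    # running average of w_1, ..., w_t
    for t, z in enumerate(data, start=1):
        avg += (w - avg) / t              # average includes the iterate used this round
        g = grad(w, z)                    # stochastic gradient at sample z
        w = w - (lr / np.sqrt(t)) * g     # OGD step with 1/sqrt(t) step size
        norm = np.linalg.norm(w)          # project onto the ball of the given
        if norm > radius:                 # radius to keep the domain bounded
            w *= radius / norm
    return avg
```

For instance, with squared loss one could pass `grad = lambda w, z: 2 * (w @ z[0] - z[1]) * z[0]` for samples `z = (x, y)`. The paper's contribution is a differentially private counterpart of this conversion, achieving the $\tilde O(1/\sqrt{T} + \sqrt{d}/\epsilon T)$ rate stated in the abstract.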