Privacy-preserving Stochastic Gradual Learning
Format: Article
Language: English
Online access: Order full text
Abstract: It is challenging for stochastic optimization to handle large-scale sensitive data safely. Recently, Duchi et al. proposed a private sampling strategy to prevent privacy leakage in stochastic optimization. However, this strategy degrades robustness, since it is equivalent to injecting noise into each gradient, which adversely affects updates of the primal variable. To address this challenge, we introduce a robust stochastic optimization under the framework of local privacy, called Privacy-pREserving StochasTIc Gradual lEarning (PRESTIGE). PRESTIGE bridges private updates of the primal variable (by private sampling) with gradual curriculum learning (CL). Specifically, the noise injection leads to the issue of label noise, but the robust learning process of CL can combat label noise. Thus, PRESTIGE yields "private but robust" updates of the primal variable on the private curriculum, namely a reordered label sequence provided by CL. In theory, we reveal the convergence rate and maximum complexity of PRESTIGE. Empirical results on six datasets show that PRESTIGE achieves a good tradeoff between privacy preservation and robustness over baselines.
DOI: 10.48550/arxiv.1810.00383
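The abstract combines two ingredients: locally private gradient updates obtained by perturbing each per-example gradient, and a curriculum that reorders training examples from easy to hard. The sketch below is only an illustration of how those two ingredients can be put together in plain SGD; it is not the paper's PRESTIGE algorithm, and the function name, loss choice, noise calibration, and curriculum criterion (sorting by current loss) are all assumptions made for the example.

```python
import numpy as np

def private_curriculum_sgd(X, y, epsilon=1.0, lr=0.1, clip=1.0, epochs=5, seed=0):
    """Illustrative sketch (not PRESTIGE itself): SGD on the logistic loss
    where, each epoch, examples are reordered from easy to hard (a simple
    curriculum) and every per-example gradient is clipped and perturbed
    with Laplace noise before the update (a local-privacy-style step)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)

    def per_example_grad(w, xi, yi):
        # Gradient of log(1 + exp(-y * w.x)) for a single example, labels in {-1, +1}.
        margin = yi * (xi @ w)
        return -yi * xi / (1.0 + np.exp(margin))

    for _ in range(epochs):
        # "Curriculum": visit examples with small current loss first (easy -> hard).
        losses = np.log1p(np.exp(-y * (X @ w)))
        order = np.argsort(losses)
        for i in order:
            g = per_example_grad(w, X[i], y[i])
            # Clip the L1 norm so the per-update sensitivity is bounded by `clip`.
            g = g * min(1.0, clip / (np.linalg.norm(g, 1) + 1e-12))
            # Laplace noise calibrated to that sensitivity; this only bounds the
            # leakage of a single update and ignores composition across steps.
            g = g + rng.laplace(scale=clip / epsilon, size=d)
            w -= lr * g
        lr *= 0.9  # simple step-size decay
    return w
```

Under these assumptions, calling `private_curriculum_sgd(X_train, y_train, epsilon=0.5)` on a (hypothetical) feature matrix and {-1, +1} label vector returns a noisy linear classifier; the trade-off the abstract describes shows up here as the choice of `epsilon` (more noise, more privacy) against how well the curriculum ordering keeps the noisy updates on track.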