Foreseeing Reconstruction Quality of Gradient Inversion: An Optimization Perspective
Abstract: Gradient inversion attacks can leak data privacy when clients share weight updates with the server in federated learning (FL). Existing studies mainly use L2 or cosine distance as the loss function for gradient matching in the attack. Our empirical investigation shows that the vulnerability ranking of samples varies with the loss function used. The gradient norm, which is commonly used as a vulnerability proxy for gradient inversion attacks, cannot explain this, as it remains constant regardless of the gradient matching loss. In this paper, we propose the first loss-aware vulnerability proxy (LAVP). LAVP is either the maximum or the minimum eigenvalue of the Hessian of the gradient matching loss evaluated at the ground truth. This choice is based on our theoretical findings on the local optimization behavior of gradient inversion in the proximity of the ground truth, which corresponds to the worst-case attack scenario. We demonstrate the effectiveness of LAVP on various architectures and datasets, showing that it consistently outperforms the gradient norm in capturing sample vulnerability. The performance of each proxy is measured in terms of Spearman's rank correlation with several similarity scores. This work will contribute to strengthening FL security against potential loss functions beyond L2 or cosine distance.
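To make the proxy concrete, below is a minimal sketch (not the authors' implementation) of how one might compute a LAVP-style quantity in PyTorch: a gradient matching loss (L2 or cosine distance) between the gradients induced by a candidate input and the observed target gradients, followed by a power-iteration estimate of the largest-magnitude eigenvalue of that loss's Hessian with respect to the input, evaluated at the ground truth. All names (`model`, `criterion`, `x_gt`, `target_grads`) are placeholders, and estimating the minimum eigenvalue would additionally require a spectral shift, which is omitted here.

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, criterion, x, y, target_grads, kind="l2"):
    # Gradients of the training loss at (x, y) w.r.t. the model parameters,
    # kept differentiable (create_graph=True) so we can differentiate
    # through them again with respect to the input.
    loss = criterion(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    g = torch.cat([p.reshape(-1) for p in grads])
    t = torch.cat([p.reshape(-1) for p in target_grads])
    if kind == "l2":
        return (g - t).pow(2).sum()
    return 1.0 - F.cosine_similarity(g, t, dim=0)  # cosine distance

def lavp_extreme_eigenvalue(model, criterion, x_gt, y, target_grads,
                            kind="l2", iters=30):
    # Power iteration on Hessian-vector products of the matching loss
    # w.r.t. the input, evaluated at the ground truth x_gt. This converges
    # to the eigenvalue of largest magnitude; the minimum eigenvalue would
    # need an additional spectral shift (omitted in this sketch).
    x = x_gt.clone().requires_grad_(True)
    v = torch.randn_like(x)
    v = v / v.norm()
    eig = torch.tensor(0.0)
    for _ in range(iters):
        match = gradient_matching_loss(model, criterion, x, y, target_grads, kind)
        (grad_x,) = torch.autograd.grad(match, x, create_graph=True)
        (hvp,) = torch.autograd.grad(grad_x, x, grad_outputs=v)  # H @ v
        eig = torch.dot(hvp.reshape(-1), v.reshape(-1))  # Rayleigh quotient
        v = hvp / (hvp.norm() + 1e-12)
    return eig.item()
```

In this sketch, the extreme eigenvalue characterizes the curvature of the matching-loss landscape around the ground truth, which is the intuition the abstract associates with sample vulnerability; whether the maximum or the minimum eigenvalue is the better proxy depends on the gradient matching loss, per the paper.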
DOI: 10.48550/arxiv.2312.12488