Gradient Inversion Attack on Graph Neural Networks
Main authors:
Format: Article
Language: English
Subjects:
Online access: order full text
Abstract: Graph federated learning is essential for training over large graph datasets while protecting data privacy: each client stores a subset of the local graph data, and the server collects the local gradients and broadcasts only the aggregated gradients. Recent studies reveal that a malicious attacker can steal private image data from the gradients exchanged during federated learning of neural networks. However, none of the existing works has studied the vulnerability of graph data and graph neural networks to such attacks. To fill this gap, the present paper studies whether private data can be recovered from leaked gradients in both node classification and graph classification tasks, and proposes a novel attack named Graph Leakage from Gradients (GLG). Two widely used GNN frameworks are analyzed, namely GCN and GraphSAGE, and the effects of different model settings on recovery are discussed extensively. Through theoretical analysis and empirical validation, it is shown that parts of the graph data can be leaked from the gradients.
DOI: 10.48550/arxiv.2411.19440
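To illustrate the family of attack the abstract refers to, below is a minimal gradient-matching sketch (in the style of "deep leakage from gradients") against a one-layer GCN for node classification. This is not the paper's GLG method: the one-layer model, the problem sizes, and the assumption that the attacker already knows the normalized adjacency matrix and the labels are all illustrative choices made here for brevity.

```python
# Sketch of a gradient-matching recovery attack on a one-layer GCN.
# Illustrative only; not the GLG attack from the paper.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_nodes, n_feat, n_class = 6, 8, 3

# "Private" client data the attacker tries to recover.
A = torch.eye(n_nodes) + torch.rand(n_nodes, n_nodes).round()   # random adjacency with self-loops
A = ((A + A.T) > 0).float()
d = A.sum(1).rsqrt()
A_hat = d[:, None] * A * d[None, :]                              # symmetric normalization
X_true = torch.randn(n_nodes, n_feat)                            # private node features
y_true = torch.randint(0, n_class, (n_nodes,))                   # assumed known to the attacker here

W = torch.randn(n_feat, n_class, requires_grad=True)             # shared model weights

def node_loss(X):
    logits = A_hat @ X @ W                                       # one-layer GCN forward pass
    return F.cross_entropy(logits, y_true)

# Gradient the client would upload in federated learning.
true_grad = torch.autograd.grad(node_loss(X_true), W)[0].detach()

# Attacker optimizes dummy features so their gradient matches the uploaded one.
X_dummy = torch.randn(n_nodes, n_feat, requires_grad=True)
opt = torch.optim.Adam([X_dummy], lr=0.1)
for step in range(500):
    opt.zero_grad()
    dummy_grad = torch.autograd.grad(node_loss(X_dummy), W, create_graph=True)[0]
    match = ((dummy_grad - true_grad) ** 2).sum()                # gradient-matching objective
    match.backward()
    opt.step()

print("feature recovery error:", (X_dummy - X_true).norm().item())
```

The sketch only targets node features under strong knowledge assumptions; the paper additionally considers graph classification, GraphSAGE, and the question of which parts of the graph data (features, labels, structure) are recoverable under different model settings.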