FedGIG: Graph Inversion from Gradient in Federated Learning
Format: Article
Language: English
Abstract: Recent studies have shown that Federated learning (FL) is vulnerable to
Gradient Inversion Attacks (GIA), which can recover private training data from
shared gradients. However, existing methods are designed for dense, continuous
data such as images or vectorized texts, and cannot be directly applied to
sparse and discrete graph data. This paper first explores GIA's impact on
Federated Graph Learning (FGL) and introduces Graph Inversion from Gradient in
Federated Learning (FedGIG), a novel GIA method specifically designed for
graph-structured data. FedGIG includes the adjacency matrix constraining
module, which ensures the sparsity and discreteness of the reconstructed graph
data, and the subgraph reconstruction module, which is designed to complete
missing common subgraph structures. Extensive experiments on molecular datasets
demonstrate FedGIG's superior accuracy over existing GIA techniques.
DOI: 10.48550/arxiv.2412.18513
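The gradient-matching idea behind GIA, as summarized in the abstract, can be illustrated with a minimal sketch: the attacker observes a gradient computed on private data and optimizes dummy data until its gradient matches, while projecting the dummy values into a constrained set. This is NOT the paper's FedGIG algorithm; the toy linear model, the known zero label, and the simple [0,1] box projection (standing in loosely for FedGIG's sparsity/discreteness constraints) are all illustrative assumptions.

```python
import numpy as np

# Shared model weights (known to the attacker in FL) and the client's
# private binary data, here a single "adjacency-like" row vector.
w = np.array([0.2, 0.1, 0.3, 0.4, 0.1, 0.2])
x_true = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 1.0])
y = 0.0  # label assumed known to the attacker (a common GIA assumption)

def client_gradient(x):
    # Gradient of the squared loss (w.x - y)^2 w.r.t. the weights w,
    # i.e. what a client would share in federated learning.
    return 2.0 * (w @ x - y) * x

g_target = client_gradient(x_true)  # the gradient the server observes

# Attacker: gradient descent on ||client_gradient(x') - g_target||^2,
# projecting x' into [0,1] after every step; discreteness is enforced
# by a final rounding, loosely mimicking an adjacency constraint.
x = np.full_like(x_true, 0.5)
lr = 0.01
for _ in range(5000):
    s = w @ x
    h = 2.0 * s * x            # dummy gradient (with y = 0)
    diff = h - g_target
    # Analytic gradient of the matching loss w.r.t. x':
    # dL/dx_j = 4 w_j (diff . x) + 4 s diff_j
    grad = 4.0 * w * (diff @ x) + 4.0 * s * diff
    x = np.clip(x - lr * grad, 0.0, 1.0)  # box projection

x_rec = np.round(x)  # enforce discreteness at the end
```

On this small example the projected descent recovers the private binary vector exactly; real graph data is far harder, which is what motivates the constraining and subgraph-reconstruction modules described above.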