Stealing Training Graphs from Graph Neural Networks
Saved in:
Main Authors:
Format: Article
Language: eng
Subjects:
Online Access: Order full text
Summary: Graph Neural Networks (GNNs) have shown promising results in modeling graphs in various tasks. The training of GNNs, especially on specialized tasks such as bioinformatics, demands extensive expert annotations, which are expensive and usually contain sensitive information about data providers. The trained GNN models are often shared for deployment in the real world. Since neural networks can memorize their training samples, the model parameters of GNNs carry a high risk of leaking private training data. Our theoretical analysis shows strong connections between trained GNN parameters and the training graphs used, confirming the training graph leakage issue. However, explorations into training data leakage from trained GNNs are rather limited. Therefore, we investigate a novel problem of stealing graphs from trained GNNs. To obtain high-quality graphs that resemble the target training set, a graph diffusion model with diffusion noise optimization is deployed as a graph generator. Furthermore, we propose a selection method that effectively leverages GNN model parameters to identify training graphs from samples generated by the graph diffusion model. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed framework in stealing training graphs from the trained GNN.
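To make the abstract's selection step concrete, here is a minimal, hypothetical sketch of one plausible criterion: score diffusion-generated candidate graphs by the trained model's loss, and keep the lowest-loss ones on the assumption that memorized training graphs are fit unusually well. All names here (`SimpleGNN`, `select_candidates`, the mean-aggregation layer) are illustrative assumptions, not the paper's actual implementation, which may use a different selection statistic.

```python
# Hedged sketch of loss-based candidate selection; not the paper's method.
import torch
import torch.nn as nn

class SimpleGNN(nn.Module):
    """Minimal message-passing GNN for graph classification, standing in
    for the trained target model whose parameters the attacker holds."""
    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # One round of mean aggregation over neighbors, then mean readout.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        h = torch.relu(self.lin1(adj @ x / deg))
        return self.lin2(h.mean(dim=0))  # graph-level logits

@torch.no_grad()
def select_candidates(model, candidates, labels, k):
    """Rank generated graphs by the target model's loss and keep the k
    lowest-loss ones: graphs the model fits unusually well are the most
    likely to resemble memorized training graphs."""
    loss_fn = nn.CrossEntropyLoss()
    losses = []
    for (adj, x), y in zip(candidates, labels):
        logits = model(adj, x)
        losses.append(loss_fn(logits.unsqueeze(0), y.unsqueeze(0)).item())
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return order[:k]

# Toy usage with random candidates (illustrative only):
model = SimpleGNN(in_dim=8, hid_dim=16, n_classes=2)
cands = [(torch.eye(5), torch.randn(5, 8)) for _ in range(10)]
ys = [torch.tensor(0) for _ in range(10)]
top = select_candidates(model, cands, ys, k=3)
```

In the paper's framework the candidates would come from the graph diffusion model with diffusion noise optimization rather than from random tensors; the sketch only illustrates how trained parameters can be leveraged to separate likely training graphs from other generated samples.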
DOI: 10.48550/arxiv.2411.11197