FedGIG: Graph Inversion from Gradient in Federated Learning

Recent studies have shown that Federated learning (FL) is vulnerable to Gradient Inversion Attacks (GIA), which can recover private training data from shared gradients. However, existing methods are designed for dense, continuous data such as images or vectorized texts, and cannot be directly applied to sparse and discrete graph data. This paper first explores GIA's impact on Federated Graph Learning (FGL) and introduces Graph Inversion from Gradient in Federated Learning (FedGIG), a novel GIA method specifically designed for graph-structured data. FedGIG includes the adjacency matrix constraining module, which ensures the sparsity and discreteness of the reconstructed graph data, and the subgraph reconstruction module, which is designed to complete missing common subgraph structures. Extensive experiments on molecular datasets demonstrate FedGIG's superior accuracy over existing GIA techniques.
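The abstract's core premise — that shared gradients can leak private training inputs — can be illustrated with a classic closed-form recovery for a fully-connected layer with bias. This is a standard result from the gradient-inversion literature, not FedGIG's algorithm (which, per the abstract, additionally handles sparse, discrete adjacency matrices); all names below are illustrative:

```python
import numpy as np

# Toy illustration of gradient leakage (NOT the FedGIG method): for a
# fully-connected layer out = W @ x + b with any loss L, the chain rule gives
#   dL/dW = (dL/dout) outer x    and    dL/db = dL/dout,
# so any row i with (dL/db)[i] != 0 reveals the private input exactly:
#   x = (dL/dW)[i, :] / (dL/db)[i]

rng = np.random.default_rng(42)
n_in, n_out = 8, 4
W = rng.normal(size=(n_out, n_in))
b = rng.normal(size=n_out)
x = rng.normal(size=n_in)          # the client's private input
target = rng.normal(size=n_out)

# Client side: forward pass and gradients of 0.5 * ||out - target||^2.
out = W @ x + b
d_out = out - target               # dL/dout
grad_W = np.outer(d_out, x)        # dL/dW, uploaded to the server
grad_b = d_out                     # dL/db, uploaded to the server

# Attacker side: recover x from the shared gradients alone.
i = int(np.argmax(np.abs(grad_b)))   # pick a row where dL/db is nonzero
x_recovered = grad_W[i, :] / grad_b[i]

assert np.allclose(x_recovered, x)   # exact reconstruction of private data
```

FedGIG's contribution, as the abstract describes it, is making this style of attack work when the target is a graph: a naive continuous optimization would yield dense, real-valued adjacency matrices, which its constraining and subgraph reconstruction modules rule out.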

Bibliographic Details
Main Authors: Xiao, Tianzhe; Li, Yichen; Qi, Yining; Wang, Haozhao; Li, Ruixuan
Format: Article
Language: English
Subjects: Computer Science - Cryptography and Security; Computer Science - Learning
Online Access: Order full text
creator Xiao, Tianzhe; Li, Yichen; Qi, Yining; Wang, Haozhao; Li, Ruixuan
description Recent studies have shown that Federated learning (FL) is vulnerable to Gradient Inversion Attacks (GIA), which can recover private training data from shared gradients. However, existing methods are designed for dense, continuous data such as images or vectorized texts, and cannot be directly applied to sparse and discrete graph data. This paper first explores GIA's impact on Federated Graph Learning (FGL) and introduces Graph Inversion from Gradient in Federated Learning (FedGIG), a novel GIA method specifically designed for graph-structured data. FedGIG includes the adjacency matrix constraining module, which ensures the sparsity and discreteness of the reconstructed graph data, and the subgraph reconstruction module, which is designed to complete missing common subgraph structures. Extensive experiments on molecular datasets demonstrate FedGIG's superior accuracy over existing GIA techniques.
format Article
creationdate 2024-12-24
rights http://creativecommons.org/licenses/by/4.0 (open access, free to read)
backlink https://arxiv.org/abs/2412.18513
identifier DOI: 10.48550/arxiv.2412.18513
language eng
recordid cdi_arxiv_primary_2412_18513
source arXiv.org
subjects Computer Science - Cryptography and Security
Computer Science - Learning
title FedGIG: Graph Inversion from Gradient in Federated Learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T11%3A37%3A30IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=FedGIG:%20Graph%20Inversion%20from%20Gradient%20in%20Federated%20Learning&rft.au=Xiao,%20Tianzhe&rft.date=2024-12-24&rft_id=info:doi/10.48550/arxiv.2412.18513&rft_dat=%3Carxiv_GOX%3E2412_18513%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true