Analysing the Influence of Attack Configurations on the Reconstruction of Medical Images in Federated Learning

The idea of federated learning is to train deep neural network models collaboratively and share them with multiple participants without exposing their private training data to each other. This is highly attractive in the medical domain due to patients' privacy records. However, a recently proposed method called Deep Leakage from Gradients enables attackers to reconstruct data from shared gradients. This study shows how easy it is to reconstruct images for different data initialization schemes and distance measures. We show how data and model architecture influence the optimal choice of initialization scheme and distance measure configurations when working with single images. We demonstrate that the choice of initialization scheme and distance measure can significantly increase convergence speed and quality. Furthermore, we find that the optimal attack configuration depends largely on the nature of the target image distribution and the complexity of the model architecture.
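The gradient-leakage idea in the abstract can be illustrated on a toy model. The sketch below is hypothetical and far simpler than the paper's setting: for a single linear neuron f(x) = w·x + b trained with squared error, the shared gradient alone reveals the private input, since dL/dw = r·x and dL/db = r (with residual r = f(x) − y), so x = (dL/dw)/(dL/db) whenever r ≠ 0. Deep Leakage from Gradients generalizes this to deep networks by optimizing a dummy input until its gradients match the shared ones; the short loop shows that optimization view on the same toy model. All data and weights are made up for illustration.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Private training example (never shared directly in federated learning).
x_true = [0.5, -1.2, 0.8, 2.0]
y_true = 1.0

# Model parameters known to every federated participant.
w = [0.3, -0.1, 0.7, 0.2]
b = 0.05

# Gradients of the loss 0.5 * (f(x) - y)^2 that the client would share.
r = dot(w, x_true) + b - y_true      # prediction residual
grad_w = [r * xi for xi in x_true]   # dL/dw = r * x
grad_b = r                           # dL/db = r

# 1) Closed-form reconstruction: the bias gradient leaks the residual,
#    so the input falls out directly as grad_w / grad_b.
x_closed = [gw / grad_b for gw in grad_w]
y_closed = dot(w, x_closed) + b - grad_b

# 2) DLG-style view: start from a dummy input and minimise the
#    gradient-matching loss ||grad_b * x_hat - grad_w||^2 by gradient
#    descent (convex here, because grad_b already pins down the residual
#    for this one-neuron model).
x_hat = [0.0] * len(w)
lr = 1.0
for _ in range(500):
    g = [2 * grad_b * (grad_b * xh - gw) for xh, gw in zip(x_hat, grad_w)]
    x_hat = [xh - lr * gi for xh, gi in zip(x_hat, g)]
# Both x_closed and x_hat converge to the private x_true.
```

In the paper's deep-network setting no closed form exists, which is why the attack's iterative optimization, and hence the choice of dummy-data initialization scheme and gradient distance measure that this study analyses, matters.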

Bibliographic details
Main authors: Dahlgaard, Mads Emil, Jørgensen, Morten Wehlast, Fuglsang, Niels Asp, Nassar, Hiba
Format: Article
Language: eng
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Cryptography and Security; Computer Science - Learning
Online access: Order full text
doi_str_mv 10.48550/arxiv.2204.13808
creationdate 2022-04-25
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Cryptography and Security
Computer Science - Learning