Analyzing the robustness of decentralized horizontal and vertical federated learning architectures in a non-IID scenario

Federated learning (FL) enables participants to collaboratively train machine and deep learning models while safeguarding data privacy. However, the FL paradigm still has drawbacks that affect its trustworthiness, as malicious participants could launch adversarial attacks against the training process.
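To make the horizontal/vertical distinction in the title concrete, the following illustrative sketch (not taken from the paper's code) shows how the same dataset is partitioned in each setting: horizontal FL splits by samples (each participant holds different rows with the same features), while vertical FL splits by features (each participant holds different columns of the same rows).

```python
import numpy as np

# A toy dataset: 6 samples, 4 features.
data = np.arange(24).reshape(6, 4)

# Horizontal FL: each of 3 participants holds a disjoint subset of samples.
horizontal = np.array_split(data, 3, axis=0)  # three (2, 4) blocks

# Vertical FL: each of 2 participants holds a disjoint subset of features,
# with rows kept aligned across participants.
vertical = np.array_split(data, 2, axis=1)  # two (6, 2) blocks
```

The participant counts and array shapes here are arbitrary choices for illustration; the paper's architectures (HoriChain, VertiChain, VertiComb) build their training protocols on top of partitions of this kind.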

Detailed description

Bibliographic details
Published in: Applied intelligence (Dordrecht, Netherlands), 2024-04, Vol.54 (8), p.6637-6653
Main authors: Sánchez Sánchez, Pedro Miguel, Huertas Celdrán, Alberto, Martínez Pérez, Enrique Tomás, Demeter, Daniel, Bovet, Gérôme, Martínez Pérez, Gregorio, Stiller, Burkhard
Format: Article
Language: eng
Keywords:
Online access: Full text
container_end_page 6653
container_issue 8
container_start_page 6637
container_title Applied intelligence (Dordrecht, Netherlands)
container_volume 54
creator Sánchez Sánchez, Pedro Miguel
Huertas Celdrán, Alberto
Martínez Pérez, Enrique Tomás
Demeter, Daniel
Bovet, Gérôme
Martínez Pérez, Gregorio
Stiller, Burkhard
description Federated learning (FL) enables participants to collaboratively train machine and deep learning models while safeguarding data privacy. However, the FL paradigm still has drawbacks that affect its trustworthiness, as malicious participants could launch adversarial attacks against the training process. Previous research has examined the robustness of horizontal FL scenarios under various attacks. However, there is a lack of research evaluating the robustness of decentralized vertical FL and comparing it with horizontal FL architectures affected by adversarial attacks. Therefore, this study proposes three decentralized FL architectures: HoriChain, VertiChain, and VertiComb. These architectures feature different neural networks and training protocols suitable for horizontal and vertical scenarios. Subsequently, a decentralized, privacy-preserving, and federated use case with non-IID data to classify handwritten digits is deployed to assess the performance of the three architectures. Finally, a series of experiments computes and compares the robustness of the proposed architectures when they are affected by different data poisoning methods, including image watermarks and gradient poisoning adversarial attacks. The experiments demonstrate that while specific configurations of both attacks can undermine the classification performance of the architectures, HoriChain is the most robust one.
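The abstract's image-watermark data poisoning attack can be sketched minimally as follows. This is a hypothetical illustration in the spirit of the attack described, not the paper's implementation: a malicious participant stamps a small pixel pattern onto a fraction of its local handwritten-digit images and relabels them with an attacker-chosen target class. The function name and parameters are assumptions for illustration.

```python
import numpy as np

def watermark_poison(images, labels, target_label=0, poison_ratio=0.2, seed=42):
    """Return a poisoned copy of (images, labels).

    images: float array of shape (n, 28, 28), pixel values in [0, 1]
    labels: int array of shape (n,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_ratio)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp a 3x3 white square near the bottom-right corner as the watermark.
    images[idx, -4:-1, -4:-1] = 1.0
    # Relabel the watermarked samples with the attacker's target class.
    labels[idx] = target_label
    return images, labels

# Example on synthetic 28x28 "digits":
x = np.zeros((100, 28, 28))
y = np.arange(100) % 10
px, py = watermark_poison(x, y, target_label=7, poison_ratio=0.1)
```

When such poisoned samples enter a participant's local training set, the model can learn to associate the watermark with the target class, which is the degradation the paper's robustness experiments measure across the three architectures.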
doi_str_mv 10.1007/s10489-024-05510-1
format Article
publisher New York: Springer US
rights The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
fulltext fulltext
identifier ISSN: 0924-669X
ispartof Applied intelligence (Dordrecht, Netherlands), 2024-04, Vol.54 (8), p.6637-6653
issn 0924-669X
1573-7497
language eng
recordid cdi_proquest_journals_3068494285
source SpringerLink Journals - AutoHoldings
subjects Artificial Intelligence
Computer Science
Datasets
Deep learning
Experiments
Federated learning
Handwriting
Machine learning
Machines
Manufacturing
Mechanical Engineering
Neural networks
Poisoning
Poisons
Privacy
Processes
Robustness
title Analyzing the robustness of decentralized horizontal and vertical federated learning architectures in a non-IID scenario
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-25T07%3A00%3A36IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Analyzing%20the%20robustness%20of%20decentralized%20horizontal%20and%20vertical%20federated%20learning%20architectures%20in%20a%20non-IID%20scenario&rft.jtitle=Applied%20intelligence%20(Dordrecht,%20Netherlands)&rft.au=S%C3%A1nchez%20S%C3%A1nchez,%20Pedro%20Miguel&rft.date=2024-04-01&rft.volume=54&rft.issue=8&rft.spage=6637&rft.epage=6653&rft.pages=6637-6653&rft.issn=0924-669X&rft.eissn=1573-7497&rft_id=info:doi/10.1007/s10489-024-05510-1&rft_dat=%3Cproquest_cross%3E3068494285%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3068494285&rft_id=info:pmid/&rfr_iscdi=true