FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection

Federated learning (FL) enables multiple clients to train a model without compromising sensitive data. The decentralized nature of FL makes it susceptible to adversarial attacks, especially backdoor insertion during training. Recently, the edge-case backdoor attack employing the tail of the data distribution has been proposed as a powerful one, raising questions about the shortfall in current defenses' robustness guarantees. Specifically, most existing defenses cannot eliminate edge-case backdoor attacks or suffer from a trade-off between backdoor-defending effectiveness and overall performance on the primary task. To tackle this challenge, we propose FedGrad, a novel backdoor-resistant defense for FL that is resistant to cutting-edge backdoor attacks, including the edge-case attack, and performs effectively under heterogeneous client data and a large number of compromised clients. FedGrad is designed as a two-layer filtering mechanism that thoroughly analyzes the ultimate layer's gradient to identify suspicious local updates and remove them from the aggregation process. We evaluate FedGrad under different attack scenarios and show that it significantly outperforms state-of-the-art defense mechanisms. Notably, FedGrad can almost 100% correctly detect the malicious participants, thus providing a significant reduction in the backdoor effect (e.g., backdoor accuracy is less than 8%) while not reducing the main accuracy on the primary task.
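The core idea of inspecting last-layer gradients can be illustrated with a rough sketch. This is not the paper's actual algorithm: the similarity score, the threshold, and all function names here are assumptions made for illustration. It filters out client updates whose last-layer gradient disagrees with the majority before averaging.

```python
# Illustrative sketch only (hypothetical names and scoring, not FedGrad itself):
# keep a client's update when its last-layer gradient points, on average,
# in the same direction as the other clients' gradients.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two flattened gradient vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def filter_and_aggregate(last_layer_grads, threshold=0.0):
    """last_layer_grads: list of 1-D numpy arrays, one per client.
    A client is kept if its mean cosine similarity to all other clients
    exceeds `threshold` (an assumed heuristic: backdoored updates that
    pull the model away from the benign consensus tend to score low)."""
    n = len(last_layer_grads)
    scores = []
    for i in range(n):
        sims = [cosine(last_layer_grads[i], last_layer_grads[j])
                for j in range(n) if j != i]
        scores.append(np.mean(sims))
    kept = [i for i, s in enumerate(scores) if s > threshold]
    # Average only the updates that survived the filter.
    aggregated = np.mean([last_layer_grads[i] for i in kept], axis=0)
    return aggregated, kept
```

With five roughly aligned "benign" gradients and two opposing "malicious" ones, the malicious pair scores negative mean similarity and is excluded from the average; the real defense is considerably more involved (two filtering layers, per-neuron analysis), but the agreement-based exclusion principle is the same.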

Detailed description

Saved in:
Bibliographic details
Published in: arXiv.org, 2023-04
Main authors: Nguyen, Thuy Dung; Nguyen, Anh Duy; Kok-Seng Wong; Pham, Huy Hieu; Nguyen, Thanh Hung; Phi Le Nguyen; Truong Thao Nguyen
Format: Article
Language: English
Subjects: Clients; Cutting resistance; Federated learning; Inspection
Online access: Full text
EISSN: 2331-8422
Part of: arXiv.org, 2023-04
Language: English
Subjects: Clients; Cutting resistance; Federated learning; Inspection
Source: Free E-Journals