Need for Speed: Taming Backdoor Attacks with Speed and Precision

Modern deep neural network models (DNNs) require extensive data for optimal performance, prompting reliance on multiple entities for the acquisition of training datasets. One prominent security threat is backdoor attacks, where an adversary poisons a small subset of the training dataset to implant a backdoor into the model, leading to misclassifications at runtime for triggered samples. To mitigate the attack, many defense methods have been proposed, such as detecting and removing poisoned samples or rectifying trojaned model weights in victim DNNs. However, existing approaches suffer from notable inefficiency when faced with large-scale training datasets, rendering these defenses impractical in the real world. In this paper, we propose a lightweight backdoor identification and removal scheme, called ReBack. ReBack first extracts a subset of suspicious and benign samples, and then proceeds with an "averaging and differencing"-based method to identify target label(s). Next, leveraging the identification results, ReBack invokes a novel reverse engineering method to recover the exact trigger using only basic arithmetic atoms. Our experiments demonstrate that, for ImageNet with 750 labels, ReBack can defend against backdoor attacks in around 2 hours, a speed improvement of 18.5× to 214× over existing methods. For backdoor removal, the attack success rate can be decreased to 0.05% owing to 99% cosine similarity of the reversed triggers. The code is available online.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Ma, Zhuo, Yang, Yilong, Liu, Yang, Yang, Tong, Liu, Xinjing, Li, Teng, Qin, Zhan
Format: Conference Proceeding
Language: eng
Subjects:
Online Access: Order full text
container_end_page 1235
container_issue
container_start_page 1217
container_title 2024 IEEE Symposium on Security and Privacy (SP)
container_volume
creator Ma, Zhuo
Yang, Yilong
Liu, Yang
Yang, Tong
Liu, Xinjing
Li, Teng
Qin, Zhan
description Modern deep neural network models (DNNs) require extensive data for optimal performance, prompting reliance on multiple entities for the acquisition of training datasets. One prominent security threat is backdoor attacks, where an adversary poisons a small subset of the training dataset to implant a backdoor into the model, leading to misclassifications at runtime for triggered samples. To mitigate the attack, many defense methods have been proposed, such as detecting and removing poisoned samples or rectifying trojaned model weights in victim DNNs. However, existing approaches suffer from notable inefficiency when faced with large-scale training datasets, rendering these defenses impractical in the real world. In this paper, we propose a lightweight backdoor identification and removal scheme, called ReBack. ReBack first extracts a subset of suspicious and benign samples, and then proceeds with an "averaging and differencing"-based method to identify target label(s). Next, leveraging the identification results, ReBack invokes a novel reverse engineering method to recover the exact trigger using only basic arithmetic atoms. Our experiments demonstrate that, for ImageNet with 750 labels, ReBack can defend against backdoor attacks in around 2 hours, a speed improvement of 18.5× to 214× over existing methods. For backdoor removal, the attack success rate can be decreased to 0.05% owing to 99% cosine similarity of the reversed triggers. The code is available online.
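The "averaging and differencing" identification step described in the abstract can be pictured with a toy sketch. This record does not spell out ReBack's actual algorithm, so the statistic (peak-to-mean ratio of the per-label mean image minus the benign mean), the threshold, and the synthetic data below are all assumptions, not the authors' method:

```python
import numpy as np

# Illustrative sketch only: assumes a backdoor trigger shows up as a sharp,
# localized deviation when a label's averaged samples are differenced
# against the benign average. The threshold and data are made up.
rng = np.random.default_rng(0)

def flag_target_labels(samples_by_label, benign_mean, ratio_thresh=5.0):
    """Flag labels whose averaged samples, after differencing against the
    benign average, show a strong localized peak (a crude trigger cue)."""
    flagged = []
    for label, samples in samples_by_label.items():
        diff = np.abs(samples.mean(axis=0) - benign_mean)  # average, then difference
        score = diff.max() / (diff.mean() + 1e-8)          # peak-to-mean ratio
        if score > ratio_thresh:
            flagged.append(label)
    return flagged

# Synthetic 8x8 "images": label 2's samples carry a bright 2x2 trigger patch.
benign = rng.normal(0.5, 0.05, size=(200, 8, 8))
by_label = {lbl: rng.normal(0.5, 0.05, size=(50, 8, 8)) for lbl in range(4)}
by_label[2][:, :2, :2] += 1.0  # implant the hypothetical trigger on label 2

print(flag_target_labels(by_label, benign.mean(axis=0)))
```

On clean labels the difference image is near-uniform noise, so the peak-to-mean ratio stays small; the patched label's ratio spikes, which is the intuition behind flagging it as the target.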
doi_str_mv 10.1109/SP54263.2024.00216
format Conference Proceeding
eisbn 9798350331301
coden IEEPAD
publisher IEEE
startdate 2024-05-19
fulltext fulltext_linktorsrc
identifier EISSN: 2375-1207
ispartof 2024 IEEE Symposium on Security and Privacy (SP), 2024, p.1217-1235
issn 2375-1207
language eng
recordid cdi_ieee_primary_10646685
source IEEE Electronic Library (IEL)
subjects Backdoor Defense
Inspection
Machine Learning
Privacy
Rendering (computer graphics)
Reverse engineering
Runtime
Toxicology
Training
title Need for Speed: Taming Backdoor Attacks with Speed and Precision
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-15T10%3A24%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Need%20for%20Speed:%20Taming%20Backdoor%20Attacks%20with%20Speed%20and%20Precision&rft.btitle=2024%20IEEE%20Symposium%20on%20Security%20and%20Privacy%20(SP)&rft.au=Ma,%20Zhuo&rft.date=2024-05-19&rft.spage=1217&rft.epage=1235&rft.pages=1217-1235&rft.eissn=2375-1207&rft.coden=IEEPAD&rft_id=info:doi/10.1109/SP54263.2024.00216&rft_dat=%3Cieee_RIE%3E10646685%3C/ieee_RIE%3E%3Curl%3E%3C/url%3E&rft.eisbn=9798350331301&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10646685&rfr_iscdi=true