Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models
Backdoor attacks, representing an emerging threat to the integrity of deep neural networks, have garnered significant attention due to their ability to compromise deep learning systems clandestinely. While numerous backdoor attacks occur within the digital realm, their practical implementation in real-world prediction systems remains limited and vulnerable to disturbances in the physical world. Consequently, this limitation has given rise to the development of physical backdoor attacks, where trigger objects manifest as physical entities within the real world. However, creating the requisite dataset to train or evaluate a physical backdoor model is a daunting task, preventing backdoor researchers and practitioners from studying such physical attack scenarios. This paper unleashes a recipe that empowers backdoor researchers to effortlessly create a malicious, physical backdoor dataset based on advances in generative modeling. Particularly, this recipe involves three automatic modules: suggesting suitable physical triggers, generating poisoned candidate samples (either by synthesizing new samples or editing existing clean samples), and finally refining for the most plausible ones. As such, it effectively mitigates the perceived complexity associated with creating a physical backdoor dataset, transforming it from a daunting task into an attainable objective. Extensive experimental results show that datasets created by our "recipe" enable adversaries to achieve an impressive attack success rate on real physical-world data and exhibit properties similar to those reported in previous physical backdoor attack studies. This paper offers researchers a valuable toolkit for studies of physical backdoors, all within the confines of their laboratories.
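The abstract describes a three-module pipeline: trigger suggestion, poisoned-sample generation, and plausibility refinement. The following is a minimal sketch of that structure, assuming Hugging Face diffusers and transformers as stand-in generative models; the helper names, the fixed trigger list, the prompts, and the CLIP-based filter are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the three-module "recipe"; module internals are
# placeholders, not the paper's actual method.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

# Module 1: trigger suggestion. The paper proposes suitable physical triggers
# automatically; a fixed candidate list stands in for that step here.
def suggest_triggers(target_class: str) -> list[str]:
    return ["red ball", "sunglasses", "sticky note"]  # placeholder candidates

# Module 2: poisoned-sample generation. Synthesize candidate images that show
# the victim class together with the physical trigger object.
def generate_candidates(pipe, victim_class: str, trigger: str, n: int = 4):
    prompt = f"a photo of a {victim_class} with a {trigger}"
    return [pipe(prompt).images[0] for _ in range(n)]

# Module 3: refinement. Keep only the most plausible samples, scored here by
# CLIP image-text similarity (one of several possible filtering criteria).
def refine(images, prompt, model, processor, keep: int = 2):
    inputs = processor(text=[prompt], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = model(**inputs).logits_per_image.squeeze(1)  # (n_images,)
    ranked = sorted(zip(scores.tolist(), images), key=lambda t: -t[0])
    return [img for _, img in ranked[:keep]]

if __name__ == "__main__":
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    trigger = suggest_triggers("dog")[0]
    prompt = f"a photo of a dog with a {trigger}"
    candidates = generate_candidates(pipe, "dog", trigger)
    poisoned = refine(candidates, prompt, clip, proc)  # poisoned candidates
```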
Saved in:
Published in: | arXiv.org 2024-03 |
---|---|
Main authors: | Sze Jue Yang; La, Chinh D; Nguyen, Quang H; Kok-Seng Wong; Anh Tuan Tran; Chan, Chee Seng; Doan, Khoa D |
Format: | Article |
Language: | eng |
Subjects: | Artificial neural networks; Datasets; Machine learning; Recipes; Synthesis |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Sze Jue Yang; La, Chinh D; Nguyen, Quang H; Kok-Seng Wong; Anh Tuan Tran; Chan, Chee Seng; Doan, Khoa D |
description | Backdoor attacks, representing an emerging threat to the integrity of deep neural networks, have garnered significant attention due to their ability to compromise deep learning systems clandestinely. While numerous backdoor attacks occur within the digital realm, their practical implementation in real-world prediction systems remains limited and vulnerable to disturbances in the physical world. Consequently, this limitation has given rise to the development of physical backdoor attacks, where trigger objects manifest as physical entities within the real world. However, creating the requisite dataset to train or evaluate a physical backdoor model is a daunting task, preventing backdoor researchers and practitioners from studying such physical attack scenarios. This paper unleashes a recipe that empowers backdoor researchers to effortlessly create a malicious, physical backdoor dataset based on advances in generative modeling. Particularly, this recipe involves three automatic modules: suggesting suitable physical triggers, generating poisoned candidate samples (either by synthesizing new samples or editing existing clean samples), and finally refining for the most plausible ones. As such, it effectively mitigates the perceived complexity associated with creating a physical backdoor dataset, transforming it from a daunting task into an attainable objective. Extensive experimental results show that datasets created by our "recipe" enable adversaries to achieve an impressive attack success rate on real physical-world data and exhibit properties similar to those reported in previous physical backdoor attack studies. This paper offers researchers a valuable toolkit for studies of physical backdoors, all within the confines of their laboratories. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2899306104 |
source | Free E-Journals |
subjects | Artificial neural networks; Datasets; Machine learning; Recipes; Synthesis |
title | Synthesizing Physical Backdoor Datasets: An Automated Framework Leveraging Deep Generative Models |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-23T20%3A28%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Synthesizing%20Physical%20Backdoor%20Datasets:%20An%20Automated%20Framework%20Leveraging%20Deep%20Generative%20Models&rft.jtitle=arXiv.org&rft.au=Sze%20Jue%20Yang&rft.date=2024-03-15&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2899306104%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2899306104&rft_id=info:pmid/&rfr_iscdi=true |