BATG: A Backdoor Attack Method Based on Trigger Generation

Backdoor attacks aim to implant hidden backdoors into Deep Neural Networks (DNNs) so that the victim models perform well on clean images, whereas their predictions would be maliciously changed on poisoned images. However, most existing backdoor attacks lack the invisibility and robustness required for real-world applications, especially when it comes to resisting image compression techniques such as JPEG and WEBP. To address these issues, this paper proposes a Backdoor Attack Method based on Trigger Generation (BATG). Specifically, a deep convolutional generative network serves as the trigger generation model to produce effective trigger images, and an Invertible Neural Network (INN) serves as the trigger injection model to embed the generated triggers into clean images, creating poisoned images. Furthermore, a noise layer simulates image compression attacks during adversarial training, enhancing robustness against real-world image compression. Comprehensive experiments on benchmark datasets demonstrate the effectiveness, invisibility, and robustness of the proposed BATG.
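The pipeline the abstract describes (generate a trigger, embed it invisibly, then harden it against compression) can be illustrated with a deliberately simplified, pure-Python sketch. The low-amplitude blend below stands in for BATG's learned INN-based injection, and coarse quantization stands in for the JPEG/WEBP-style noise layer; none of this is the paper's actual implementation, and all names are illustrative.

```python
def embed_trigger(clean, trigger, alpha=0.05):
    # Blend the trigger into the clean image at low amplitude so the
    # perturbation stays small (the "invisibility" requirement). BATG
    # instead learns this embedding with an Invertible Neural Network.
    return [min(255.0, max(0.0, (1 - alpha) * c + alpha * t))
            for c, t in zip(clean, trigger)]

def noise_layer(img, step=8):
    # Crude stand-in for a compression attack: coarse quantization discards
    # fine detail, roughly as lossy codecs such as JPEG do. Training the
    # victim model on such outputs is what makes the trigger robust.
    return [round(p / step) * step for p in img]

clean = [float(i % 256) for i in range(64)]                  # toy 8x8 grayscale image
trigger = [255.0 if i % 2 == 0 else 0.0 for i in range(64)]  # toy trigger pattern

poisoned = embed_trigger(clean, trigger)
compressed = noise_layer(poisoned)

max_delta = max(abs(p - c) for p, c in zip(poisoned, clean))
print(max_delta <= 0.05 * 255 + 1e-9)  # True: perturbation bounded by alpha * 255
```

In BATG itself the trigger comes from a deep convolutional generative network and the simulated-compression noise layer sits inside the adversarial training loop of the victim model; this sketch only mirrors the data flow from clean image to compressed poisoned image.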

Detailed Description

Bibliographic Details
Published in: Electronics (Basel), 2024-12, Vol. 13 (24), p. 5031
Main authors: Tang, Weixuan; Xie, Haoke; Rao, Yuan; Long, Min; Qi, Tao; Zhou, Zhili
Format: Article
Language: English
Subjects:
Online access: Full text
description Backdoor attacks aim to implant hidden backdoors into Deep Neural Networks (DNNs) so that the victim models perform well on clean images, whereas their predictions would be maliciously changed on poisoned images. However, most existing backdoor attacks lack the invisibility and robustness required for real-world applications, especially when it comes to resisting image compression techniques, such as JPEG and WEBP. To address these issues, in this paper, we propose a Backdoor Attack Method based on Trigger Generation (BATG). Specifically, a deep convolutional generative network is utilized as the trigger generation model to generate effective trigger images and an Invertible Neural Network (INN) is utilized as the trigger injection model to embed the generated trigger images into clean images to create poisoned images. Furthermore, a noise layer is used to simulate image compression attacks for adversarial training, enhancing the robustness against real-world image compression. Comprehensive experiments on benchmark datasets demonstrate the effectiveness, invisibility, and robustness of the proposed BATG.
doi 10.3390/electronics13245031
format Article
publisher Basel: MDPI AG
rights 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
orcid 0000-0002-4082-1140; 0000-0002-3150-5103
identifier ISSN: 2079-9292
ispartof Electronics (Basel), 2024-12, Vol.13 (24), p.5031
issn 2079-9292
language eng
source MDPI - Multidisciplinary Digital Publishing Institute; EZB-FREE-00999 freely available EZB journals
subjects Artificial neural networks
Effectiveness
Image compression
Methods
Neural networks
Performance evaluation
Robustness
Visibility
title BATG: A Backdoor Attack Method Based on Trigger Generation