Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift

Diffusion models (DM) have become state-of-the-art generative models because of their capability to generate high-quality images from noise without adversarial training. However, they are vulnerable to backdoor attacks, as reported by recent studies. When a data input (e.g., some Gaussian noise) is stamped with a trigger (e.g., a white patch), the backdoored model always generates the target image (e.g., an improper photo). Effective defense strategies to mitigate backdoors in DMs, however, remain underexplored. To bridge this gap, we propose the first backdoor detection and removal framework for DMs. We evaluate our framework, Elijah, on hundreds of DMs of three types (DDPM, NCSN and LDM) with 13 samplers against three existing backdoor attacks. Extensive experiments show that our approach achieves close to 100% detection accuracy and reduces backdoor effects to close to zero without significantly sacrificing model utility.
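The trigger-stamping attack described in the abstract can be sketched as follows. This is an illustrative NumPy sketch only, not the paper's code: the function name, patch size, and array shapes are assumptions made for demonstration.

```python
import numpy as np

def stamp_trigger(noise, trigger, mask):
    # Blend the trigger into the sampled noise where mask == 1;
    # a backdoored DM maps such stamped inputs to the attacker's target image.
    return np.where(mask.astype(bool), trigger, noise)

# Clean input: standard Gaussian noise, as used to seed DDPM-style sampling.
rng = np.random.default_rng(0)
noise = rng.standard_normal((3, 32, 32))

# Hypothetical trigger: a white 4x4 patch in the top-left corner.
mask = np.zeros((3, 32, 32))
mask[:, :4, :4] = 1.0
trigger = np.ones((3, 32, 32))

stamped = stamp_trigger(noise, trigger, mask)
```

A detection framework like the one proposed would look for inputs of this form whose sampling trajectories collapse onto a single target image, while clean noise still yields diverse outputs.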

Detailed Description

Bibliographic Details
Published in: arXiv.org 2024-02
Main authors: An, Shengwei; Sheng-Yen Chou; Zhang, Kaiyuan; Xu, Qiuling; Guanhong Tao; Shen, Guangyu; Cheng, Siyuan; Ma, Shiqing; Chen, Pin-Yu; Ho, Tsung-Yi; Zhang, Xiangyu
Format: Article
Language: eng
Subjects: Image quality; Random noise
Online access: Full text
container_title arXiv.org
creator An, Shengwei
Sheng-Yen Chou
Zhang, Kaiyuan
Xu, Qiuling
Guanhong Tao
Shen, Guangyu
Cheng, Siyuan
Ma, Shiqing
Chen, Pin-Yu
Ho, Tsung-Yi
Zhang, Xiangyu
description Diffusion models (DM) have become state-of-the-art generative models because of their capability to generate high-quality images from noise without adversarial training. However, they are vulnerable to backdoor attacks, as reported by recent studies. When a data input (e.g., some Gaussian noise) is stamped with a trigger (e.g., a white patch), the backdoored model always generates the target image (e.g., an improper photo). Effective defense strategies to mitigate backdoors in DMs, however, remain underexplored. To bridge this gap, we propose the first backdoor detection and removal framework for DMs. We evaluate our framework, Elijah, on hundreds of DMs of three types (DDPM, NCSN and LDM) with 13 samplers against three existing backdoor attacks. Extensive experiments show that our approach achieves close to 100% detection accuracy and reduces backdoor effects to close to zero without significantly sacrificing model utility.
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2024-02
issn 2331-8422
language eng
recordid cdi_proquest_journals_2897288933
source Free E-Journals
subjects Image quality
Random noise
title Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift