Nearest is Not Dearest: Towards Practical Defense against Quantization-conditioned Backdoor Attacks

Model quantization is widely used to compress and accelerate deep neural networks. However, recent studies have revealed the feasibility of weaponizing model quantization by implanting quantization-conditioned backdoors (QCBs). These special backdoors stay dormant in released full-precision models but come into effect after standard quantization. Due to the peculiarity of QCBs, existing defenses have only a minor effect on reducing their threat or are even infeasible. In this paper, we conduct the first in-depth analysis of QCBs. We reveal that the activation of existing QCBs primarily stems from the nearest rounding operation and is closely related to the norms of the neuron-wise truncation errors (i.e., the differences between the continuous full-precision weights and their quantized versions). Motivated by these insights, we propose Error-guided Flipped Rounding with Activation Preservation (EFRAP), an effective and practical defense against QCBs. Specifically, EFRAP learns a non-nearest rounding strategy guided by neuron-wise error norms and layer-wise activation preservation, flipping the rounding strategies of neurons crucial for backdoor effects while having minimal impact on clean accuracy. Extensive evaluations on benchmark datasets demonstrate that EFRAP defeats state-of-the-art QCB attacks under various settings. Code is available at https://github.com/AntigoneRandy/QuantBackdoor_EFRAP.
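
The following is a minimal NumPy sketch of the quantities the abstract refers to: nearest-rounding quantization, neuron-wise truncation error norms, and a non-nearest (flipped) rounding step. The symmetric per-tensor int8 scheme and the flip-the-highest-error-neuron heuristic below are simplifying assumptions for illustration only; EFRAP itself learns which roundings to flip under error-norm and activation-preservation guidance (see the paper and repository for the actual method).

```python
import numpy as np

def quantize_nearest(w, n_bits=8):
    """Symmetric per-tensor quantization with standard nearest rounding."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale  # dequantized weights

def neuron_truncation_error_norms(w, w_dq):
    """L2 norm of the truncation error for each output neuron (row of w)."""
    return np.linalg.norm(w - w_dq, axis=1)

def quantize_flipped(w, flip_mask, n_bits=8):
    """Round the masked weights away from their nearest grid point
    (floor <-> ceil), a toy stand-in for a non-nearest rounding strategy."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    x = w / scale
    nearest = np.round(x)
    flipped = np.where(nearest >= x, np.floor(x), np.ceil(x))
    q = np.where(flip_mask, flipped, nearest)
    return np.clip(q, -qmax - 1, qmax) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16)).astype(np.float32)  # toy layer: 4 output neurons

w_dq = quantize_nearest(w)
err = neuron_truncation_error_norms(w, w_dq)
print("per-neuron truncation error norms:", err)

# Hypothetical defense step: flip the rounding of every weight in the
# highest-error neuron. EFRAP instead *learns* a flip assignment that
# trades off error norms against layer-wise activation preservation.
flip_mask = np.zeros(w.shape, dtype=bool)
flip_mask[err.argmax()] = True
w_defended = quantize_flipped(w, flip_mask)
print("error norms after flipping:", neuron_truncation_error_norms(w, w_defended))
```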


Bibliographic Details
Main authors: Li, Boheng; Cai, Yishuo; Li, Haowei; Xue, Feng; Li, Zhifeng; Li, Yiming
Format: Article
Language: eng
Online access: Full text available via arXiv
DOI: 10.48550/arxiv.2405.12725
Date: 2024-05-21
Rights: http://creativecommons.org/licenses/by-nc-sa/4.0
Full text: https://arxiv.org/abs/2405.12725
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Cryptography and Security