(Certified!!) Adversarial Robustness for Free

In this paper we show how to achieve state-of-the-art certified adversarial robustness to ℓ2-norm bounded perturbations by relying exclusively on off-the-shelf pretrained models. To do so, we instantiate the denoised smoothing approach of Salman et al. 2020 by combining a pretrained denoising diffusion probabilistic model and a standard high-accuracy classifier. This allows us to certify 71% accuracy on ImageNet under adversarial perturbations constrained to be within an ℓ2-norm of 0.5, an improvement of 14 percentage points over the prior certified SoTA using any approach, or an improvement of 30 percentage points over denoised smoothing. We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine-tuning or retraining of model parameters.
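The denoised-smoothing pipeline the abstract describes (perturb the input with Gaussian noise, denoise it with a pretrained diffusion model, classify the result, and take a majority vote) can be sketched as below. This is a minimal illustration of the prediction step of randomized smoothing, not the authors' implementation: `denoise` and `classify` are hypothetical stand-ins for the pretrained diffusion model and off-the-shelf classifier, and the toy "image" is a flat list of feature values.

```python
import random

SIGMA = 0.5  # Gaussian noise level; matches the certified-radius regime in the abstract

def denoise(x_noisy, sigma):
    # Placeholder for a pretrained denoising diffusion model applied for one
    # step at the timestep whose noise level matches sigma.
    # (Hypothetical stand-in: identity pass-through.)
    return x_noisy

def classify(x):
    # Placeholder for an off-the-shelf high-accuracy classifier.
    # (Hypothetical stand-in: thresholds the mean feature value into 2 classes.)
    return int(sum(x) / len(x) > 0.5)

def smoothed_predict(x, n_samples=200, sigma=SIGMA, seed=0):
    """Monte-Carlo majority vote over f(denoise(x + N(0, sigma^2 I))),
    i.e. an estimate of the smoothed classifier g(x)."""
    rng = random.Random(seed)
    votes = {}
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        label = classify(denoise(noisy, sigma))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

print(smoothed_predict([0.9] * 192))  # toy input far inside class 1
```

In the full method, the same vote counts also yield a certified ℓ2 radius around `x` via the randomized-smoothing bound of Cohen et al. 2019 (R = σ·Φ⁻¹ of a lower confidence bound on the top class's probability); the prediction step above is only the first half of that procedure.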

Full Description

Bibliographic Details
Main Authors: Carlini, Nicholas; Tramer, Florian; Dvijotham, Krishnamurthy Dj; Rice, Leslie; Sun, Mingjie; Kolter, J. Zico
Format: Article
Language: eng
Subjects: Computer Science - Cryptography and Security; Computer Science - Learning
Online Access: Order full text
creator Carlini, Nicholas
Tramer, Florian
Dvijotham, Krishnamurthy Dj
Rice, Leslie
Sun, Mingjie
Kolter, J. Zico
description In this paper we show how to achieve state-of-the-art certified adversarial robustness to ℓ2-norm bounded perturbations by relying exclusively on off-the-shelf pretrained models. To do so, we instantiate the denoised smoothing approach of Salman et al. 2020 by combining a pretrained denoising diffusion probabilistic model and a standard high-accuracy classifier. This allows us to certify 71% accuracy on ImageNet under adversarial perturbations constrained to be within an ℓ2-norm of 0.5, an improvement of 14 percentage points over the prior certified SoTA using any approach, or an improvement of 30 percentage points over denoised smoothing. We obtain these results using only pretrained diffusion models and image classifiers, without requiring any fine-tuning or retraining of model parameters.
doi_str_mv 10.48550/arxiv.2206.10550
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2206.10550
language eng
recordid cdi_arxiv_primary_2206_10550
source arXiv.org
subjects Computer Science - Cryptography and Security
Computer Science - Learning
title (Certified!!) Adversarial Robustness for Free