Position: Towards Resilience Against Adversarial Examples

Bibliographic Details
Main Authors: Dai, Sihui; Xiang, Chong; Wu, Tong; Mittal, Prateek
Format: Article
Language: English
Subjects: Computer Science - Cryptography and Security; Computer Science - Learning
Source: arXiv.org
DOI: 10.48550/arXiv.2405.01349
Published: 2024-05-02
Description: Current research on defending against adversarial examples focuses primarily on achieving robustness against a single attack type such as $\ell_2$ or $\ell_{\infty}$-bounded attacks. However, the space of possible perturbations is much larger than considered by many existing defenses and is difficult to mathematically model, so the attacker can easily bypass the defense by using a type of attack that is not covered by the defense. In this position paper, we argue that in addition to robustness, we should also aim to develop defense algorithms that are adversarially resilient -- defense algorithms should specify a means to quickly adapt the defended model to be robust against new attacks. We provide a definition of adversarial resilience and outline considerations for designing an adversarially resilient defense. We then introduce a subproblem of adversarial resilience, which we call continual adaptive robustness, in which the defender gains knowledge of the formulation of possible perturbation spaces over time and can then update their model based on this information. Additionally, we demonstrate the connection between continual adaptive robustness and previously studied problems of multiattack robustness and unforeseen attack robustness, and outline open directions within these fields that can contribute to improving continual adaptive robustness and adversarial resilience.
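The abstract contrasts robustness to a single, fixed perturbation set (e.g., an $\ell_{\infty}$ ball) with continual adaptive robustness, where the defender updates the model each time a new perturbation space is revealed. The PyTorch sketch below illustrates both pieces under stated assumptions: pgd_linf is a standard $\ell_{\infty}$-bounded PGD attack, and adapt_to_new_attack is a hypothetical adaptation step using plain adversarial fine-tuning. Neither is the paper's algorithm; the model, data loader, and schedule of revealed attacks are placeholders.

```python
# A minimal sketch, not the paper's algorithm: pgd_linf is a standard
# l_inf-bounded PGD attack; adapt_to_new_attack is a hypothetical
# continual-adaptation step that adversarially fine-tunes the model on a
# newly revealed attack. The model and data loader are assumed to be supplied.
import torch
import torch.nn.functional as F


def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent inside an l_inf ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project back into the ball
        x_adv = x_adv.clamp(0, 1)                     # stay in valid pixel range
    return x_adv.detach()


def adapt_to_new_attack(model, loader, attack_fn, lr=1e-4, epochs=1):
    """Fine-tune the defended model on examples from a newly revealed attack."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x_adv = attack_fn(model, x, y)  # craft an adversarial batch
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return model


# As perturbation spaces are revealed over time, the defender adapts in sequence:
#   for attack_fn in revealed_attacks:  # e.g., [pgd_linf, some_l2_attack, ...]
#       model = adapt_to_new_attack(model, train_loader, attack_fn)
```

Note that naive sequential fine-tuning like this can degrade robustness to earlier perturbation spaces; a real adaptation scheme would need to maintain robustness across the accumulated set of attacks, which is the multiattack robustness problem the abstract connects to continual adaptive robustness.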