ZeroGrad : Mitigating and Explaining Catastrophic Overfitting in FGSM Adversarial Training

Making deep neural networks robust to small adversarial noises has recently been sought in many applications. Adversarial training through iterative projected gradient descent (PGD) has been established as one of the mainstream ideas to achieve this goal. However, PGD is computationally demanding and often prohibitive in case of large datasets and models. For this reason, single-step PGD, also known as FGSM, has recently gained interest in the field. Unfortunately, FGSM-training leads to a phenomenon called "catastrophic overfitting," which is a sudden drop in the adversarial accuracy under the PGD attack. In this paper, we support the idea that small input gradients play a key role in this phenomenon, and hence propose to zero the input gradient elements that are small for crafting FGSM attacks. Our proposed idea, while being simple and efficient, achieves competitive adversarial accuracy on various datasets.
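The core idea from the abstract, zeroing the small-magnitude elements of the input gradient before taking the FGSM sign step, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `zerograd_fgsm_step` and the quantile parameter `q` (the fraction of smallest-magnitude elements to zero) are assumptions, since the abstract only states that small gradient elements are zeroed.

```python
import numpy as np

def zerograd_fgsm_step(grad, epsilon, q=0.5):
    """Sketch of a ZeroGrad-style FGSM perturbation.

    grad    : gradient of the loss w.r.t. the input (NumPy array)
    epsilon : L-infinity perturbation budget
    q       : hypothetical knob; fraction of smallest-magnitude
              gradient elements to zero out before the sign step
    """
    g = grad.copy()
    # Zero the elements whose magnitude falls below the q-quantile,
    # so they contribute nothing to the sign-based perturbation.
    threshold = np.quantile(np.abs(g), q)
    g[np.abs(g) < threshold] = 0.0
    # Standard FGSM: step in the sign direction, scaled by epsilon
    # (np.sign maps the zeroed coordinates to a zero perturbation).
    return epsilon * np.sign(g)

# Toy example: a gradient with a few dominant coordinates.
grad = np.array([0.001, -0.5, 0.002, 0.8, -0.003])
delta = zerograd_fgsm_step(grad, epsilon=0.1, q=0.5)
# The two smallest-magnitude coordinates receive no perturbation.
```

With `q=0` this reduces to plain FGSM; the intuition in the abstract is that near-zero gradient coordinates are what drive catastrophic overfitting, so excluding them from the attack direction mitigates it.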


Bibliographic details
Main authors: Golgooni, Zeinab; Saberi, Mehrdad; Eskandar, Masih; Rohban, Mohammad Hossein
Format: Article
Language: eng
Subjects:
Online access: Order full text
DOI: 10.48550/arxiv.2103.15476
Full record: arXiv 2103.15476; published 2021-03-29; rights: http://creativecommons.org/licenses/by/4.0 (free to read); full text at https://arxiv.org/abs/2103.15476
Record ID: cdi_arxiv_primary_2103_15476
Source: arXiv.org
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning