Revisiting the Trade-Off Between Accuracy and Robustness via Weight Distribution of Filters

Adversarial attacks have been proven to be potential threats to Deep Neural Networks (DNNs), and many methods have been proposed to defend against them. However, while these defenses enhance robustness, the accuracy on clean examples declines to a certain extent, implying that a trade-off exists between accuracy and adversarial robustness...

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence, 2024-12, Vol. 46 (12), p. 8870-8882
Main authors: Wei, Xingxing; Zhao, Shiji; Li, Bo
Format: Article
Language: English
Subjects:
Online access: Order full text
description Adversarial attacks have been proven to be potential threats to Deep Neural Networks (DNNs), and many methods have been proposed to defend against them. However, while these defenses enhance robustness, the accuracy on clean examples declines to a certain extent, implying that a trade-off exists between accuracy and adversarial robustness. In this paper, to address the trade-off problem, we theoretically explore the underlying reason for the difference in the filters' weight distributions between standard-trained and robust-trained models, and argue that this difference is an intrinsic property of static neural networks; it is therefore difficult for such networks to fundamentally improve accuracy and adversarial robustness at the same time. Based on this analysis, we propose a sample-wise dynamic network architecture named Adversarial Weight-Varied Network (AW-Net), which handles clean and adversarial examples with a "divide and rule" weight strategy. AW-Net adaptively adjusts the network's weights based on regulation signals generated by an adversarial router, which is directly influenced by the input sample. Benefiting from the dynamic architecture, clean and adversarial examples can be processed with different network weights, which provides the potential to enhance both accuracy and adversarial robustness. A series of experiments demonstrate that AW-Net is architecture-friendly in handling both clean and adversarial examples and achieves a better trade-off than state-of-the-art robust models.
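The description characterises AW-Net only at a high level: a lightweight adversarial router inspects each input sample and emits regulation signals that re-weight the backbone's filters, so clean and adversarial inputs are effectively processed by different weights. The PyTorch sketch below illustrates that general idea; it is not the authors' implementation, and the class names (AdversarialRouter, WeightVariedConv), the router architecture, and the (0, 2) signal range are assumptions made purely for illustration.

```python
# Minimal sketch of sample-wise weight regulation (assumptions noted above; not the paper's code).
import torch
import torch.nn as nn


class AdversarialRouter(nn.Module):
    """Maps an input image to per-filter regulation signals in (0, 2), one vector per sample."""

    def __init__(self, in_ch: int, num_filters: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(16, num_filters)

    def forward(self, x):
        s = self.features(x).flatten(1)           # (B, 16) summary of the input sample
        return 2.0 * torch.sigmoid(self.fc(s))    # (B, num_filters), centred around 1


class WeightVariedConv(nn.Module):
    """Convolution whose filters are rescaled per sample by the router's signals."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.router = AdversarialRouter(in_ch, out_ch)

    def forward(self, x):
        signals = self.router(x)                  # (B, out_ch), driven by the input sample
        y = self.conv(x)                          # shared filters applied to the whole batch
        # Scaling each output channel by its signal is equivalent to scaling the
        # corresponding filter, without materialising per-sample weight tensors.
        return y * signals.unsqueeze(-1).unsqueeze(-1)


if __name__ == "__main__":
    layer = WeightVariedConv(3, 64)
    x = torch.randn(8, 3, 32, 32)                 # batch of clean and/or adversarial images
    print(layer(x).shape)                         # torch.Size([8, 64, 32, 32])
```

In this sketch, clean and adversarial examples receive different effective filter weights simply because the router's signals depend on the input, which is the "divide and rule" behaviour the abstract attributes to the adversarial router.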
doi 10.1109/TPAMI.2024.3411035
identifier ISSN: 0162-8828, 1939-3539; EISSN: 1939-3539, 2160-9292; PMID: 38848237; CODEN: ITPIDJ
recordid cdi_pubmed_primary_38848237
source IEEE Electronic Library (IEL)
subjects accuracy-robustness trade-off
Adversarial examples
adversarial robustness
dynamic network structure
Filters
Optimization
Regulation
Robustness
Testing
Training
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-12T12%3A39%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Revisiting%20the%20Trade-Off%20Between%20Accuracy%20and%20Robustness%20via%20Weight%20Distribution%20of%20Filters&rft.jtitle=IEEE%20transactions%20on%20pattern%20analysis%20and%20machine%20intelligence&rft.au=Wei,%20Xingxing&rft.date=2024-12&rft.volume=46&rft.issue=12&rft.spage=8870&rft.epage=8882&rft.pages=8870-8882&rft.issn=0162-8828&rft.eissn=1939-3539&rft.coden=ITPIDJ&rft_id=info:doi/10.1109/TPAMI.2024.3411035&rft_dat=%3Cproquest_RIE%3E3065981106%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3065981106&rft_id=info:pmid/38848237&rft_ieee_id=10552117&rfr_iscdi=true