Architectural Resilience to Foreground-and-Background Adversarial Noise

Adversarial attacks in the form of imperceptible perturbations of normal images have been extensively studied, and for every new defense methodology created, multiple adversarial attacks are found to counteract it. In particular, a popular style of attack, exemplified in recent years by DeepFool and Carlini-Wagner, relies solely on white-box scenarios in which full access to the predictive model and its weights is required. In this work, we instead propose distinct model-agnostic benchmark perturbations of images in order to investigate the resilience and robustness of different network architectures. Results empirically determine that increasing depth within most types of Convolutional Neural Networks typically improves model resilience towards general attacks, with improvement steadily decreasing as the model becomes deeper. Additionally, we find that a notable difference in adversarial robustness exists between residual architectures with skip connections and non-residual architectures of similar complexity. Our findings provide direction for future understanding of the effect of residual connections and depth on network robustness.
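
The abstract describes model-agnostic benchmark perturbations applied to the foreground and background of images, evaluated without querying the model. As a rough illustration of that idea, here is a minimal sketch in Python, assuming a Gaussian noise model and a precomputed binary foreground mask; the function name and parameters are hypothetical, and the paper's actual perturbations may differ:

    import numpy as np

    def perturb_foreground_background(image, mask, fg_sigma=0.1, bg_sigma=0.1, rng=None):
        # image: float array in [0, 1], shape (H, W) or (H, W, C)
        # mask:  binary array, shape (H, W); 1 marks foreground pixels
        rng = np.random.default_rng() if rng is None else rng
        mask = mask.astype(bool)
        if image.ndim == 3:
            mask = mask[..., None]  # broadcast the mask over the channel axis
        # Draw independent Gaussian noise for the foreground and background,
        # so each region's severity can be varied without touching the other.
        noise_fg = rng.normal(0.0, fg_sigma, image.shape)
        noise_bg = rng.normal(0.0, bg_sigma, image.shape)
        perturbed = np.where(mask, image + noise_fg, image + noise_bg)
        return np.clip(perturbed, 0.0, 1.0)

Because the perturbation never queries a model's weights or gradients, the same perturbed images can be fed to every architecture under comparison, which is what makes such a benchmark model-agnostic.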

Bibliographic Details
Main Authors: Cheng, Carl; Hu, Evan
Format: Article
Language: English
Published: 2020-03-22 (arXiv)
DOI: 10.48550/arxiv.2003.10045
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Learning
Online Access: https://arxiv.org/abs/2003.10045