Defending against Adversarial Images using Basis Functions Transformations

We study the effectiveness of various approaches that defend against adversarial attacks on deep networks via manipulations based on basis function representations of images. Specifically, we experiment with low-pass filtering, PCA, JPEG compression, low-resolution wavelet approximation, and soft-thresholding. We evaluate these defense techniques using three types of popular attacks in black-, gray-, and white-box settings. Our results show that JPEG compression tends to outperform the other tested defenses in most of the settings considered, in addition to soft-thresholding, which performs well in specific cases and yields a milder decrease in accuracy on benign examples. In addition, we mathematically derive a novel white-box attack in which the adversarial perturbation is composed only of terms corresponding to a pre-determined subset of the basis functions, of which a "low frequency attack" is a special case.
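
To make the listed transformations concrete, below is a minimal sketch (not the authors' implementation) of two of the defenses the abstract mentions, JPEG compression and wavelet soft-thresholding, applied to an image before it is passed to a classifier. It assumes NumPy, Pillow, and PyWavelets; the function names and parameter values (quality=75, threshold=0.05, the 'db4' wavelet) are illustrative assumptions, not values reported in the paper.

# Illustrative sketch only: two of the input transformations studied in the
# paper, JPEG re-encoding and wavelet soft-thresholding, applied as a
# preprocessing defense. Parameter values are hypothetical choices.
import io

import numpy as np
import pywt
from PIL import Image


def jpeg_defense(image_uint8: np.ndarray, quality: int = 75) -> np.ndarray:
    """Re-encode an HxWx3 uint8 image as JPEG and decode it back."""
    buf = io.BytesIO()
    Image.fromarray(image_uint8).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))


def soft_threshold_defense(image_uint8: np.ndarray,
                           threshold: float = 0.05,
                           wavelet: str = "db4",
                           level: int = 2) -> np.ndarray:
    """Soft-threshold the wavelet detail coefficients of each color channel."""
    image = image_uint8.astype(np.float64) / 255.0
    channels = []
    for c in range(image.shape[2]):
        coeffs = pywt.wavedec2(image[:, :, c], wavelet, level=level)
        # Keep the approximation coefficients, shrink the detail coefficients.
        shrunk = [coeffs[0]] + [
            tuple(pywt.threshold(d, threshold, mode="soft") for d in detail)
            for detail in coeffs[1:]
        ]
        rec = pywt.waverec2(shrunk, wavelet)
        channels.append(rec[:image.shape[0], :image.shape[1]])
    out = np.clip(np.stack(channels, axis=2), 0.0, 1.0)
    return (out * 255.0).round().astype(np.uint8)


if __name__ == "__main__":
    # A possibly adversarial input; in practice this would be a real image.
    x = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)
    print(jpeg_defense(x).shape, soft_threshold_defense(x).shape)

Both functions map an input to a nearby image whose high-frequency content has been suppressed, which is the common mechanism behind the basis-function defenses the paper evaluates.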


Bibliographic Details
Main Authors: Shaham, Uri; Garritano, James; Yamada, Yutaro; Weinberger, Ethan; Cloninger, Alex; Cheng, Xiuyuan; Stanton, Kelly; Kluger, Yuval
Format: Article
Language: English
Published: 2018-03-28
Subjects: Computer Science - Learning; Statistics - Machine Learning
DOI: 10.48550/arxiv.1803.10840
Online Access: Full text available at arXiv.org (https://arxiv.org/abs/1803.10840)