Defend Against Adversarial Samples by Using Perceptual Hash

Image classifiers based on Deep Neural Networks (DNNs) have been shown to be easily fooled by well-designed perturbations. Previous defense methods either require expensive computation or reduce the accuracy of the image classifiers. In this paper, we propose a novel defense method based on perceptual hashing.

Detailed Description

Saved in:
Bibliographic Details
Published in: Computers, Materials & Continua, 2020, Vol. 62 (3), p. 1365-1386
Main authors: Liu, Changrui; Ye, Dengpan; Shang, Yueyun; Jiang, Shunzhi; Li, Shiyu; Mei, Yuan; Wang, Liqiang
Format: Article
Language: English
Subjects: Artificial neural networks; Classifiers; Defense
Online access: Full text
container_end_page 1386
container_issue 3
container_start_page 1365
container_title Computers, materials & continua
container_volume 62
creator Liu, Changrui
Ye, Dengpan
Shang, Yueyun
Jiang, Shunzhi
Li, Shiyu
Mei, Yuan
Wang, Liqiang
description Image classifiers based on Deep Neural Networks (DNNs) have been shown to be easily fooled by well-designed perturbations. Previous defense methods either require expensive computation or reduce the accuracy of the image classifiers. In this paper, we propose a novel defense method based on perceptual hashing. Our main goal is to disrupt the generation of perturbations by comparing the similarities of images, thereby achieving the purpose of defense. To verify our idea, we defended against two main attack methods (a white-box attack and a black-box attack) on different DNN-based image classifiers and show that, after applying our defense method, the attack success rate for all DNN-based image classifiers decreases significantly. More specifically, for the white-box attack, the attack success rate is reduced by an average of 36.3%. For the black-box attack, the average attack success rates of the targeted attack and the non-targeted attack are reduced by 72.8% and 76.7%, respectively. The proposed method is a simple and effective defense and provides a new way to defend against adversarial samples.
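The defense summarized above relies on comparing perceptual hashes of images so that the near-duplicate queries produced while crafting perturbations can be detected. The paper's exact hash function and thresholds are not reproduced here; the sketch below is only a minimal illustration of the general idea, using a simple average hash (aHash) and a Hamming-distance comparison, with an arbitrarily chosen `threshold` that is not taken from the paper.

```python
import numpy as np
from PIL import Image

def average_hash(image_path, hash_size=8):
    """Compute a simple average-hash (aHash) perceptual fingerprint of an image."""
    # Downscale to hash_size x hash_size grayscale, discarding fine detail and color,
    # so small adversarial perturbations barely change the fingerprint.
    img = Image.open(image_path).convert("L").resize(
        (hash_size, hash_size), Image.LANCZOS)
    pixels = np.asarray(img, dtype=np.float64)
    # Each bit records whether a pixel is brighter than the image mean.
    return (pixels > pixels.mean()).flatten()

def hamming_distance(hash_a, hash_b):
    """Number of differing bits between two perceptual hashes."""
    return int(np.count_nonzero(hash_a != hash_b))

def is_suspicious_query(reference_path, query_path, threshold=5):
    """Flag a query image that is perceptually near-identical to a reference image.

    A small Hamming distance indicates the two images look the same to a human,
    which is characteristic of an adversarial example derived from the reference.
    The threshold here is illustrative, not a value reported in the paper.
    """
    d = hamming_distance(average_hash(reference_path), average_hash(query_path))
    return d <= threshold

# Example usage (file names are placeholders):
# print(is_suspicious_query("clean.png", "query.png"))
```

In this sketch, flagged queries could be rejected or answered from the reference image's label, which would interfere with iterative perturbation generation in the way the abstract describes; how exactly the paper responds to a match is specified in the full text rather than here.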
doi_str_mv 10.32604/cmc.2020.07421
format Article
fulltext fulltext
identifier ISSN: 1546-2226
ispartof Computers, materials & continua, 2020, Vol.62 (3), p.1365-1386
issn 1546-2226
1546-2218
1546-2226
language eng
recordid cdi_proquest_journals_2394949208
source EZB-FREE-00999 freely available EZB journals
subjects Artificial neural networks
Classifiers
Defense
title Defend Against Adversarial Samples by Using Perceptual Hash