Adversarial image detection in deep neural networks

Deep neural networks increasingly pervade computer vision applications, in particular image classification. Nevertheless, recent works have demonstrated that it is quite easy to create adversarial examples, i.e., images maliciously modified to cause deep neural networks to fail. Such images contain changes unnoticeable to the human eye but sufficient to mislead the network, posing a serious threat to machine learning methods. In this paper, we investigate the robustness of the representations learned by the fooled neural network by analyzing the activations of its hidden layers. Specifically, we tested scoring approaches used for kNN classification to distinguish between correctly classified authentic images and adversarial examples. These scores are obtained by searching only among the very same images used to train the network. The results show that hidden-layer activations can be used to reveal incorrect classifications caused by adversarial attacks.
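The detection idea from the abstract can be sketched as follows: for a test image, look up the k nearest training images in the space of hidden-layer activations and check how many of them agree with the network's predicted label. This is a minimal illustrative sketch using synthetic activation vectors and NumPy; the feature dimensionality, cluster values, and k below are assumptions for illustration, not the paper's actual layers or parameters.

```python
import numpy as np

def knn_class_agreement(train_feats, train_labels, test_feat, predicted_label, k=5):
    """Fraction of the k nearest training activations that share the
    network's predicted label. A low score means the prediction is
    inconsistent with the learned representation, which may indicate
    an adversarial input."""
    dists = np.linalg.norm(train_feats - test_feat, axis=1)  # Euclidean distance to all training activations
    nearest = np.argsort(dists)[:k]                          # indices of the k closest ones
    return float(np.mean(train_labels[nearest] == predicted_label))

# Toy example: two well-separated clusters of synthetic "activations".
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.1, (20, 4)),   # class 0 activations near 0
                   rng.normal(5.0, 0.1, (20, 4))])  # class 1 activations near 5
labels = np.array([0] * 20 + [1] * 20)

test_feat = np.zeros(4)  # an input whose activations sit in the class-0 cluster
authentic = knn_class_agreement(feats, labels, test_feat, predicted_label=0)
adversarial = knn_class_agreement(feats, labels, test_feat, predicted_label=1)
print(authentic, adversarial)
```

A score near 1.0 (prediction agrees with the neighborhood) suggests an authentic classification, while a score near 0.0 flags a prediction that contradicts the training-set neighborhood, as an adversarial example would.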

Detailed description

Saved in:
Bibliographic details
Published in: Multimedia tools and applications 2019-02, Vol.78 (3), p.2815-2835
Main authors: Carrara, Fabio; Falchi, Fabrizio; Caldelli, Roberto; Amato, Giuseppe; Becarelli, Rudy
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page 2835
container_issue 3
container_start_page 2815
container_title Multimedia tools and applications
container_volume 78
creator Carrara, Fabio; Falchi, Fabrizio; Caldelli, Roberto; Amato, Giuseppe; Becarelli, Rudy
description Deep neural networks increasingly pervade computer vision applications, in particular image classification. Nevertheless, recent works have demonstrated that it is quite easy to create adversarial examples, i.e., images maliciously modified to cause deep neural networks to fail. Such images contain changes unnoticeable to the human eye but sufficient to mislead the network, posing a serious threat to machine learning methods. In this paper, we investigate the robustness of the representations learned by the fooled neural network by analyzing the activations of its hidden layers. Specifically, we tested scoring approaches used for kNN classification to distinguish between correctly classified authentic images and adversarial examples. These scores are obtained by searching only among the very same images used to train the network. The results show that hidden-layer activations can be used to reveal incorrect classifications caused by adversarial attacks.
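The abstract's claim that adversarial examples are "quite easy to create" can be illustrated with the classic fast-gradient-sign idea: nudge the input in the direction that increases the loss, using only the sign of the gradient. The linear "network", weights, and epsilon below are toy assumptions chosen so the gradient is computable by hand; this is not the attack or model used in the paper.

```python
import numpy as np

# Toy linear "network": logits = W @ x, two classes, two input features.
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def fgsm(x, true_label, eps):
    """One fast-gradient-sign step: perturb x by eps in the direction
    that increases the cross-entropy loss of the true label."""
    logits = W @ x
    p = np.exp(logits) / np.sum(np.exp(logits))  # softmax probabilities
    onehot = np.eye(2)[true_label]
    grad_x = W.T @ (p - onehot)                  # d(cross-entropy)/dx for a linear softmax model
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.0])                 # correctly classified as class 0
x_adv = fgsm(x, true_label=0, eps=0.6)   # small, structured perturbation
print(np.argmax(W @ x), np.argmax(W @ x_adv))
```

Even though each coordinate moves by at most eps, the perturbation is aligned with the loss gradient, so the predicted class flips; in high-dimensional image space a much smaller eps per pixel suffices, which is why such changes can stay unnoticeable to the human eye.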
doi_str_mv 10.1007/s11042-018-5853-4
format Article
fulltext fulltext
identifier ISSN: 1380-7501
ispartof Multimedia tools and applications, 2019-02, Vol.78 (3), p.2815-2835
issn 1380-7501
eissn 1573-7721
language eng
recordid cdi_proquest_journals_2016179184
source Springer Nature - Complete Springer Journals
subjects Artificial neural networks
Computer Communication Networks
Computer Science
Computer vision
Data Structures and Information Theory
Image classification
Image detection
Machine learning
Multimedia Information Systems
Neural networks
Special Purpose and Application-Based Systems
title Adversarial image detection in deep neural networks