Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks

Recent works have demonstrated that imperceptible perturbations to input data, known as adversarial examples, can mislead neural networks’ output. Moreover, the same adversarial sample can be transferable and used to fool different neural models. Such vulnerabilities impede the use of neural networks in mission-critical tasks. To the best of our knowledge, this is the first paper that evaluates the robustness of emerging CNN- and transformer-inspired image classifier models, such as SpinalNet and the Compact Convolutional Transformer (CCT), against popular white- and black-box adversarial attacks imported from the Adversarial Robustness Toolbox (ART). In addition, the adversarial transferability of the generated samples across the given models was studied. The tests were carried out on the CIFAR-10 dataset, and the obtained results show that the level of susceptibility of SpinalNet to the same attacks is similar to that of the traditional VGG model, whereas CCT demonstrates better generalization and robustness. The results of this work can be used as a reference for further studies, such as the development of new attacks and defense mechanisms.
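
The abstract describes importing attacks from ART and measuring how adversarial examples crafted on one model transfer to another. Below is a minimal sketch of that workflow, assuming ART's PyTorchClassifier wrapper and the FGSM attack; the small CNN, the epsilon budget, and the training settings are illustrative stand-ins, not the paper's actual VGG, SpinalNet, or CCT models.

```python
# Hypothetical sketch of the evaluation workflow described in the abstract:
# craft FGSM adversarial examples on a source model with ART, then test
# them against an independently trained target model (transferability).
import numpy as np
import torch
import torch.nn as nn
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier
from art.utils import load_cifar10

def make_classifier() -> PyTorchClassifier:
    # Tiny CNN stand-in; the paper evaluates VGG, SpinalNet, and CCT instead.
    model = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(64 * 8 * 8, 10),
    )
    return PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
        input_shape=(3, 32, 32),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

# ART loads CIFAR-10 channels-last; PyTorch expects channels-first (NCHW).
(x_train, y_train), (x_test, y_test), _, _ = load_cifar10()
x_train = x_train.transpose(0, 3, 1, 2).astype(np.float32)
x_test = x_test.transpose(0, 3, 1, 2).astype(np.float32)

source, target = make_classifier(), make_classifier()
source.fit(x_train, y_train, batch_size=128, nb_epochs=5)
target.fit(x_train, y_train, batch_size=128, nb_epochs=5)

# White-box FGSM on the source model; eps = 8/255 is an illustrative budget.
x_adv = FastGradientMethod(estimator=source, eps=8 / 255).generate(x=x_test[:1000])

def accuracy(clf: PyTorchClassifier, x: np.ndarray, y: np.ndarray) -> float:
    # Labels from load_cifar10 are one-hot encoded.
    return float(np.mean(np.argmax(clf.predict(x), axis=1) == np.argmax(y, axis=1)))

print("source clean acc:", accuracy(source, x_test[:1000], y_test[:1000]))
print("source adv acc:  ", accuracy(source, x_adv, y_test[:1000]))
# Transferability: examples crafted on `source`, evaluated on `target`.
print("target adv acc:  ", accuracy(target, x_adv, y_test[:1000]))
```

The same loop generalizes to the paper's setting by swapping in other ART evasion attacks (white- or black-box) and other model pairs, then comparing the drop in target-model accuracy on the transferred examples.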

Bibliographic Details
Published in: Electronics (Basel), 2024-02, Vol. 13 (3), p. 592
Main Authors: Smagulova, Kamilya; Bacha, Lina; Fouda, Mohammed E.; Kanj, Rouwaida; Eltawil, Ahmed
Format: Article
Language: English
Subjects: Computational linguistics; Data security; Image classification; Image processing; Language processing; Methods; Natural language interfaces; Networks; Neural networks; Robustness
Online Access: Full text
DOI: 10.3390/electronics13030592
Publisher: MDPI AG, Basel
Rights: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
ISSN: 2079-9292
EISSN: 2079-9292
Source: MDPI - Multidisciplinary Digital Publishing Institute; EZB-FREE-00999 freely available EZB journals
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-26T15%3A24%3A17IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Robustness%20and%20Transferability%20of%20Adversarial%20Attacks%20on%20Different%20Image%20Classification%20Neural%20Networks&rft.jtitle=Electronics%20(Basel)&rft.au=Smagulova,%20Kamilya&rft.date=2024-02-01&rft.volume=13&rft.issue=3&rft.spage=592&rft.pages=592-&rft.issn=2079-9292&rft.eissn=2079-9292&rft_id=info:doi/10.3390/electronics13030592&rft_dat=%3Cgale_proqu%3EA782089659%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2923907684&rft_id=info:pmid/&rft_galeid=A782089659&rfr_iscdi=true