Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples
Published in: | arXiv.org 2023-10 |
---|---|
Main authors: | Xu, Nuo; Mahmood, Kaleel; Fang, Haowen; Rathbun, Ethan; Ding, Caiwen; Wen, Wujie |
Format: | Article |
Language: | eng |
Subjects: | Deep learning; Experimentation; Machine learning; Neural networks; Security; Spiking |
Online Access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Xu, Nuo; Mahmood, Kaleel; Fang, Haowen; Rathbun, Ethan; Ding, Caiwen; Wen, Wujie |
description | Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance. However, unlike traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remain relatively underdeveloped. In this work, we focus on advancing the adversarial attack side of SNNs and make three major contributions. First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique, even in the case of adversarially trained SNNs. Second, using the best surrogate gradient technique, we analyze the transferability of adversarial attacks on SNNs and other state-of-the-art architectures like Vision Transformers (ViTs) and Big Transfer Convolutional Neural Networks (CNNs). We demonstrate that adversarial examples created by non-SNN architectures are not often misclassified by SNNs. Third, due to the lack of a ubiquitous white-box attack that is effective across both the SNN and CNN/ViT domains, we develop a new white-box attack, the Auto Self-Attention Gradient Attack (Auto-SAGA). Our novel attack generates adversarial examples capable of fooling both SNN and non-SNN models simultaneously. Auto-SAGA is as much as \(91.1\%\) more effective on SNN/ViT model ensembles and provides a \(3\times\) boost in attack effectiveness on adversarially trained SNN ensembles compared to conventional white-box attacks like Auto-PGD. Our experiments and analyses are broad and rigorous, covering three datasets (CIFAR-10, CIFAR-100 and ImageNet), five different white-box attacks and nineteen classifier models (seven for each CIFAR dataset and five models for ImageNet). A minimal illustrative sketch of the surrogate-gradient attack idea follows the record fields below. |
format | Article |
fullrecord | <record><control><sourceid>proquest</sourceid><recordid>TN_cdi_proquest_journals_2712094606</recordid><sourceformat>XML</sourceformat><sourcesystem>PC</sourcesystem><sourcerecordid>2712094606</sourcerecordid><originalsourceid>FETCH-proquest_journals_27120946063</originalsourceid><addsrcrecordid>eNqNi8sKwjAURIMgWNR_CLgupIm26q5IxZUudF-ivdW0Nak3iY-_t4of4Go4M2d6JOBCROF8yvmAjK2tGGM8TvhsJgJSp87JU630mboL0H2raljSnf7SAaW2JaA8qka5F5W6oHs4efyAKb_257kFj7Lpwj0M1pY6Q9PiDmglqq7PnvLaNmBHpF_KxsL4l0MyWWeH1SZs0dw8WJdXxqPuppwnEWeLacxi8Z_1BhFrSME</addsrcrecordid><sourcetype>Aggregation Database</sourcetype><iscdi>true</iscdi><recordtype>article</recordtype><pqid>2712094606</pqid></control><display><type>article</type><title>Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples</title><source>Free E- Journals</source><creator>Xu, Nuo ; Mahmood, Kaleel ; Fang, Haowen ; Rathbun, Ethan ; Ding, Caiwen ; Wen, Wujie</creator><creatorcontrib>Xu, Nuo ; Mahmood, Kaleel ; Fang, Haowen ; Rathbun, Ethan ; Ding, Caiwen ; Wen, Wujie</creatorcontrib><description>Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance. However, unlike traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remain relatively underdeveloped. In this work, we focus on advancing the adversarial attack side of SNNs and make three major contributions. First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique, even in the case of adversarially trained SNNs. Second, using the best surrogate gradient technique, we analyze the transferability of adversarial attacks on SNNs and other state-of-the-art architectures like Vision Transformers (ViTs) and Big Transfer Convolutional Neural Networks (CNNs). We demonstrate that the adversarial examples created by non-SNN architectures are not misclassified often by SNNs. Third, due to the lack of an ubiquitous white-box attack that is effective across both the SNN and CNN/ViT domains, we develop a new white-box attack, the Auto Self-Attention Gradient Attack (Auto-SAGA). Our novel attack generates adversarial examples capable of fooling both SNN and non-SNN models simultaneously. Auto-SAGA is as much as \(91.1\%\) more effective on SNN/ViT model ensembles and provides a \(3\times\) boost in attack effectiveness on adversarially trained SNN ensembles compared to conventional white-box attacks like Auto-PGD. Our experiments and analyses are broad and rigorous covering three datasets (CIFAR-10, CIFAR-100 and ImageNet), five different white-box attacks and nineteen classifier models (seven for each CIFAR dataset and five models for ImageNet).</description><identifier>EISSN: 2331-8422</identifier><language>eng</language><publisher>Ithaca: Cornell University Library, arXiv.org</publisher><subject>Deep learning ; Experimentation ; Machine learning ; Neural networks ; Security ; Spiking</subject><ispartof>arXiv.org, 2023-10</ispartof><rights>2023. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). 
Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.</rights><oa>free_for_read</oa><woscitedreferencessubscribed>false</woscitedreferencessubscribed></display><links><openurl>$$Topenurl_article</openurl><openurlfulltext>$$Topenurlfull_article</openurlfulltext><thumbnail>$$Tsyndetics_thumb_exl</thumbnail><link.rule.ids>777,781</link.rule.ids></links><search><creatorcontrib>Xu, Nuo</creatorcontrib><creatorcontrib>Mahmood, Kaleel</creatorcontrib><creatorcontrib>Fang, Haowen</creatorcontrib><creatorcontrib>Rathbun, Ethan</creatorcontrib><creatorcontrib>Ding, Caiwen</creatorcontrib><creatorcontrib>Wen, Wujie</creatorcontrib><title>Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples</title><title>arXiv.org</title><description>Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance. However, unlike traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remain relatively underdeveloped. In this work, we focus on advancing the adversarial attack side of SNNs and make three major contributions. First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique, even in the case of adversarially trained SNNs. Second, using the best surrogate gradient technique, we analyze the transferability of adversarial attacks on SNNs and other state-of-the-art architectures like Vision Transformers (ViTs) and Big Transfer Convolutional Neural Networks (CNNs). We demonstrate that the adversarial examples created by non-SNN architectures are not misclassified often by SNNs. Third, due to the lack of an ubiquitous white-box attack that is effective across both the SNN and CNN/ViT domains, we develop a new white-box attack, the Auto Self-Attention Gradient Attack (Auto-SAGA). Our novel attack generates adversarial examples capable of fooling both SNN and non-SNN models simultaneously. Auto-SAGA is as much as \(91.1\%\) more effective on SNN/ViT model ensembles and provides a \(3\times\) boost in attack effectiveness on adversarially trained SNN ensembles compared to conventional white-box attacks like Auto-PGD. 
Our experiments and analyses are broad and rigorous covering three datasets (CIFAR-10, CIFAR-100 and ImageNet), five different white-box attacks and nineteen classifier models (seven for each CIFAR dataset and five models for ImageNet).</description><subject>Deep learning</subject><subject>Experimentation</subject><subject>Machine learning</subject><subject>Neural networks</subject><subject>Security</subject><subject>Spiking</subject><issn>2331-8422</issn><fulltext>true</fulltext><rsrctype>article</rsrctype><creationdate>2023</creationdate><recordtype>article</recordtype><sourceid>ABUWG</sourceid><sourceid>AFKRA</sourceid><sourceid>AZQEC</sourceid><sourceid>BENPR</sourceid><sourceid>CCPQU</sourceid><sourceid>DWQXO</sourceid><recordid>eNqNi8sKwjAURIMgWNR_CLgupIm26q5IxZUudF-ivdW0Nak3iY-_t4of4Go4M2d6JOBCROF8yvmAjK2tGGM8TvhsJgJSp87JU630mboL0H2raljSnf7SAaW2JaA8qka5F5W6oHs4efyAKb_257kFj7Lpwj0M1pY6Q9PiDmglqq7PnvLaNmBHpF_KxsL4l0MyWWeH1SZs0dw8WJdXxqPuppwnEWeLacxi8Z_1BhFrSME</recordid><startdate>20231013</startdate><enddate>20231013</enddate><creator>Xu, Nuo</creator><creator>Mahmood, Kaleel</creator><creator>Fang, Haowen</creator><creator>Rathbun, Ethan</creator><creator>Ding, Caiwen</creator><creator>Wen, Wujie</creator><general>Cornell University Library, arXiv.org</general><scope>8FE</scope><scope>8FG</scope><scope>ABJCF</scope><scope>ABUWG</scope><scope>AFKRA</scope><scope>AZQEC</scope><scope>BENPR</scope><scope>BGLVJ</scope><scope>CCPQU</scope><scope>DWQXO</scope><scope>HCIFZ</scope><scope>L6V</scope><scope>M7S</scope><scope>PIMPY</scope><scope>PQEST</scope><scope>PQQKQ</scope><scope>PQUKI</scope><scope>PRINS</scope><scope>PTHSS</scope></search><sort><creationdate>20231013</creationdate><title>Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples</title><author>Xu, Nuo ; Mahmood, Kaleel ; Fang, Haowen ; Rathbun, Ethan ; Ding, Caiwen ; Wen, Wujie</author></sort><facets><frbrtype>5</frbrtype><frbrgroupid>cdi_FETCH-proquest_journals_27120946063</frbrgroupid><rsrctype>articles</rsrctype><prefilter>articles</prefilter><language>eng</language><creationdate>2023</creationdate><topic>Deep learning</topic><topic>Experimentation</topic><topic>Machine learning</topic><topic>Neural networks</topic><topic>Security</topic><topic>Spiking</topic><toplevel>online_resources</toplevel><creatorcontrib>Xu, Nuo</creatorcontrib><creatorcontrib>Mahmood, Kaleel</creatorcontrib><creatorcontrib>Fang, Haowen</creatorcontrib><creatorcontrib>Rathbun, Ethan</creatorcontrib><creatorcontrib>Ding, Caiwen</creatorcontrib><creatorcontrib>Wen, Wujie</creatorcontrib><collection>ProQuest SciTech Collection</collection><collection>ProQuest Technology Collection</collection><collection>Materials Science & Engineering Collection</collection><collection>ProQuest Central (Alumni Edition)</collection><collection>ProQuest Central UK/Ireland</collection><collection>ProQuest Central Essentials</collection><collection>ProQuest Central</collection><collection>Technology Collection</collection><collection>ProQuest One Community College</collection><collection>ProQuest Central Korea</collection><collection>SciTech Premium Collection</collection><collection>ProQuest Engineering Collection</collection><collection>Engineering Database</collection><collection>Publicly Available Content Database</collection><collection>ProQuest One Academic Eastern Edition (DO NOT USE)</collection><collection>ProQuest One Academic</collection><collection>ProQuest One Academic UKI 
Edition</collection><collection>ProQuest Central China</collection><collection>Engineering Collection</collection></facets><delivery><delcategory>Remote Search Resource</delcategory><fulltext>fulltext</fulltext></delivery><addata><au>Xu, Nuo</au><au>Mahmood, Kaleel</au><au>Fang, Haowen</au><au>Rathbun, Ethan</au><au>Ding, Caiwen</au><au>Wen, Wujie</au><format>book</format><genre>document</genre><ristype>GEN</ristype><atitle>Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples</atitle><jtitle>arXiv.org</jtitle><date>2023-10-13</date><risdate>2023</risdate><eissn>2331-8422</eissn><abstract>Spiking neural networks (SNNs) have attracted much attention for their high energy efficiency and for recent advances in their classification performance. However, unlike traditional deep learning approaches, the analysis and study of the robustness of SNNs to adversarial examples remain relatively underdeveloped. In this work, we focus on advancing the adversarial attack side of SNNs and make three major contributions. First, we show that successful white-box adversarial attacks on SNNs are highly dependent on the underlying surrogate gradient technique, even in the case of adversarially trained SNNs. Second, using the best surrogate gradient technique, we analyze the transferability of adversarial attacks on SNNs and other state-of-the-art architectures like Vision Transformers (ViTs) and Big Transfer Convolutional Neural Networks (CNNs). We demonstrate that the adversarial examples created by non-SNN architectures are not misclassified often by SNNs. Third, due to the lack of an ubiquitous white-box attack that is effective across both the SNN and CNN/ViT domains, we develop a new white-box attack, the Auto Self-Attention Gradient Attack (Auto-SAGA). Our novel attack generates adversarial examples capable of fooling both SNN and non-SNN models simultaneously. Auto-SAGA is as much as \(91.1\%\) more effective on SNN/ViT model ensembles and provides a \(3\times\) boost in attack effectiveness on adversarially trained SNN ensembles compared to conventional white-box attacks like Auto-PGD. Our experiments and analyses are broad and rigorous covering three datasets (CIFAR-10, CIFAR-100 and ImageNet), five different white-box attacks and nineteen classifier models (seven for each CIFAR dataset and five models for ImageNet).</abstract><cop>Ithaca</cop><pub>Cornell University Library, arXiv.org</pub><oa>free_for_read</oa></addata></record> |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2712094606 |
source | Free E-Journals |
subjects | Deep learning; Experimentation; Machine learning; Neural networks; Security; Spiking |
title | Attacking the Spike: On the Transferability and Security of Spiking Neural Networks to Adversarial Examples |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T07%3A53%3A59IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Attacking%20the%20Spike:%20On%20the%20Transferability%20and%20Security%20of%20Spiking%20Neural%20Networks%20to%20Adversarial%20Examples&rft.jtitle=arXiv.org&rft.au=Xu,%20Nuo&rft.date=2023-10-13&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2712094606%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2712094606&rft_id=info:pmid/&rfr_iscdi=true |
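The abstract's first contribution hinges on the surrogate gradient used to backpropagate through a spiking network's non-differentiable spike function. The following PyTorch sketch is an illustration of that general idea only, not the authors' implementation: a single-step white-box attack (FGSM) driven through a toy spiking layer whose Heaviside spike receives a fast-sigmoid surrogate gradient. The network architecture, surrogate slope, threshold, and epsilon are all illustrative assumptions.

```python
# Hedged sketch (not the paper's code): FGSM against a toy rate-coded SNN,
# with a fast-sigmoid surrogate gradient standing in for the Heaviside spike.
import torch
import torch.nn as nn

THRESHOLD = 1.0  # assumed membrane threshold


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= THRESHOLD).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        slope = 10.0  # assumed surrogate sharpness
        surrogate = 1.0 / (1.0 + slope * (v - THRESHOLD).abs()) ** 2
        return grad_output * surrogate


class TinySNN(nn.Module):
    """Minimal SNN classifier: one hidden spiking layer unrolled over T time steps."""

    def __init__(self, in_dim=3 * 32 * 32, hidden=256, classes=10, steps=8):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)
        self.steps = steps

    def forward(self, x):
        x = x.flatten(1)
        v = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        logits = 0.0
        for _ in range(self.steps):
            v = 0.9 * v + self.fc1(x)      # leaky integration of the input current
            s = SpikeFn.apply(v)           # spike via the surrogate-gradient function
            v = v - s                      # soft reset after a spike
            logits = logits + self.fc2(s)  # accumulate the readout over time
        return logits / self.steps


def fgsm_attack(model, x, y, eps=8 / 255):
    """Single-step white-box attack; the gradient flows through the surrogate."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()


if __name__ == "__main__":
    model = TinySNN()
    x = torch.rand(4, 3, 32, 32)            # stand-in for CIFAR-10 images
    y = torch.randint(0, 10, (4,))
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())          # perturbation bounded by eps
```

An ensemble attack in the spirit of Auto-SAGA, as described in the abstract, would instead combine gradients from several heterogeneous models (for example, an SNN and a ViT) with automatically adjusted per-model weights; the sketch above covers only the single-model surrogate-gradient step that such an attack would build on.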