Efficient Adversarial Attack Based on Moment Estimation and Lookahead Gradient

Adversarial example generation perturbs inputs with imperceptible noise to induce misclassifications in neural networks, and serves as a means of assessing model robustness. Among adversarial attack algorithms, the momentum iterative fast gradient sign method (MI-FGSM) and its variants constitute a class of highly effective offensive strategies, achieving near-perfect attack success rates in white-box settings. However, these methods' use of the sign activation function severely degrades gradient information, leading to low success rates in black-box attacks and large adversarial perturbations. In this paper, we introduce a novel adversarial attack algorithm, NA-FGTM. Our method employs the Tanh activation function instead of sign, which accurately preserves gradient information. In addition, it utilizes the Adam optimization algorithm together with Nesterov acceleration, which stabilizes gradient update directions and expedites gradient convergence; above all, the transferability of adversarial examples is enhanced. Integrated with data augmentation techniques such as DIM, TIM, and SIM, NA-FGTM can further improve the efficacy of black-box attacks. Extensive experiments on the ImageNet dataset demonstrate that our method outperforms state-of-the-art approaches in black-box attack success rate and generates adversarial examples with smaller perturbations.
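
The abstract describes NA-FGTM only at a high level. As a reading aid, here is a minimal sketch of what one iteration of such an attack could look like, combining the three ingredients the abstract names: a Nesterov-style lookahead, Adam moment estimation, and tanh in place of the sign function. This is a reconstruction from the abstract alone, not the authors' implementation; the function name na_fgtm_attack, the hyperparameter defaults, and the exact update order are assumptions.

    # Hypothetical sketch of an NA-FGTM-style iteration (PyTorch).
    # `model` is any classifier returning logits; x, y are a batch of
    # inputs in [0, 1] and their true labels.
    import torch
    import torch.nn.functional as F

    def na_fgtm_attack(model, x, y, eps=16/255, steps=10,
                       beta1=0.9, beta2=0.999, adam_eps=1e-8):
        alpha = eps / steps                  # per-step perturbation budget
        x_adv = x.clone().detach()
        m = torch.zeros_like(x)              # first moment (mean of grads)
        v = torch.zeros_like(x)              # second moment (uncentered var.)
        for t in range(1, steps + 1):
            # Nesterov lookahead: take the gradient ahead of the current point.
            x_nes = (x_adv + alpha * beta1 * m).detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_nes), y)
            grad = torch.autograd.grad(loss, x_nes)[0]
            # Adam moment estimation with bias correction.
            m = beta1 * m + (1 - beta1) * grad
            v = beta2 * v + (1 - beta2) * grad ** 2
            m_hat = m / (1 - beta1 ** t)
            v_hat = v / (1 - beta2 ** t)
            step = m_hat / (v_hat.sqrt() + adam_eps)
            # tanh replaces sign: bounded like sign, but it preserves the
            # relative magnitudes of the gradient instead of collapsing
            # every coordinate to +/-1.
            x_adv = x_adv + alpha * torch.tanh(step)
            # Project back into the eps-ball and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
            x_adv = x_adv.clamp(0, 1).detach()
        return x_adv

The two commented steps are the ones the abstract credits for the improvement: tanh retains the gradient magnitude information that sign discards, and the Adam/Nesterov combination stabilizes the update direction across iterations, which the authors report improves black-box transferability.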

Bibliographic Details
Published in: Electronics (Basel), 2024-07, Vol. 13 (13), p. 2464
Main authors: Hong, Dian; Chen, Deng; Zhang, Yanduo; Zhou, Huabing; Xie, Liang; Ju, Jianping; Tang, Jianyin
Format: Article
Language: English
Subjects: Algorithms; Black boxes; Data augmentation; Effectiveness; Iterative methods; Methods; Neural networks; Noise generation; Optimization; Perturbation; Success
Online access: Full text (open access, CC BY 4.0)
Publisher: MDPI AG, Basel
ISSN: 2079-9292
DOI: 10.3390/electronics13132464