Demystifying the Transferability of Adversarial Attacks in Computer Networks

Convolutional Neural Network (CNN) models are among the most frequently used deep learning architectures, and they are employed extensively in both academia and industry. Recent studies demonstrated that adversarial attacks against such models can retain their effectiveness even when used against models other than the one targeted by the attacker. This major property is known as transferability, and it makes CNNs ill-suited for security applications. In this paper, we provide the first comprehensive study assessing the robustness of CNN-based models for computer networks against adversarial transferability, and we investigate whether the transferability property holds in computer network applications. In our experiments, we first consider five different attacks: the Iterative Fast Gradient Sign Method (I-FGSM), the Jacobian-based Saliency Map Attack (JSMA), the Limited-memory Broyden-Fletcher-Goldfarb-Shanno attack (L-BFGS), Projected Gradient Descent (PGD), and the DeepFool attack. Then, we perform these attacks against three well-known datasets: the Network-based Detection of IoT Botnet Attacks (N-BaIoT) dataset, the Domain Generating Algorithms (DGA) dataset, and the RIPE Atlas dataset. Our experimental results show clearly that transferability occurs in specific use cases for the I-FGSM, JSMA, and L-BFGS attacks; in such scenarios, the attack success rate on the target network ranges from 63.00% to 100%. Finally, we suggest two shielding strategies to hinder attack transferability, by considering the Most Powerful Attacks (MPAs) and a mismatched LSTM architecture.
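
For readers unfamiliar with the attacks listed in the abstract, the following is a minimal sketch of the Iterative Fast Gradient Sign Method (I-FGSM) in PyTorch. It illustrates the general technique only, not the authors' implementation; `model`, `x`, and `y` are hypothetical placeholders for a trained classifier, an input batch, and its labels, and the [0, 1] feature range is an assumption.

```python
import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps=0.03, alpha=0.005, steps=10):
    # Take `steps` small signed-gradient ascent steps of size `alpha`,
    # projecting the accumulated perturbation back into the eps-ball
    # around the clean input after every step.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + torch.clamp(x_adv - x.detach(), -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)  # assumes features scaled to [0, 1]
    return x_adv.detach()
```

The single-step FGSM is the special case `steps=1` with `alpha=eps`; iterating with smaller steps typically yields stronger adversarial examples at the same perturbation budget.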
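The transferability question itself reduces to a simple measurement: craft adversarial examples on a surrogate model and count how often they also fool an independently trained target model. A hedged sketch of that evaluation loop, reusing `i_fgsm` from above and assuming hypothetical `surrogate`, `target`, and `loader` objects:

```python
def transfer_success_rate(surrogate, target, loader, eps=0.03):
    # Craft adversarial examples on the surrogate, replay them on the
    # target, and report the fraction the target misclassifies.
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = i_fgsm(surrogate, x, y, eps=eps)
        with torch.no_grad():
            pred = target(x_adv).argmax(dim=1)
        fooled += (pred != y).sum().item()
        total += y.numel()
    return fooled / total
```

The returned fraction corresponds to the "attack success rate on the target network" that the abstract reports as ranging from 63.00% to 100% in the transferable cases.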

Bibliographic Details
Published in: arXiv.org, 2022-03
Main authors: Nowroozi, Ehsan; Mekdad, Yassine; Hajian Berenjestanaki, Mohammad; Conti, Mauro; EL Fergougui, Abdeslam
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Subjects: Algorithms; Artificial neural networks; Computer networks; Datasets; Domains; Iterative methods; Machine learning
Online Access: Full text