Enhancing the Sustainability of Deep-Learning-Based Network Intrusion Detection Classifiers against Adversarial Attacks
An intrusion detection system (IDS) is an effective tool for securing networks and a dependable technique for improving a user’s internet security. It alerts administrators whenever anomalous behavior occurs. An IDS fundamentally depends on classifying network packets as benign or attack...
Saved in:
Published in: | Sustainability 2023-06, Vol.15 (12), p.9801 |
---|---|
Main Authors: | Alotaibi, Afnan; Rassam, Murad A |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Full text |
container_end_page | |
---|---|
container_issue | 12 |
container_start_page | 9801 |
container_title | Sustainability |
container_volume | 15 |
creator | Alotaibi, Afnan; Rassam, Murad A |
description | An intrusion detection system (IDS) is an effective tool for securing networks and a dependable technique for improving a user’s internet security. It alerts administrators whenever anomalous behavior occurs. An IDS fundamentally depends on classifying network packets as benign or attack. Moreover, IDSs achieve better results when built with machine learning (ML)/deep learning (DL) techniques, such as convolutional neural networks (CNNs). However, a key limitation of ML/DL-based IDSs is their vulnerability to adversarial attacks: inputs crafted by attackers to compromise the ML/DL models and degrade their accuracy. This paper therefore describes the construction of a sustainable CNN-based IDS and presents a defense against adversarial attacks that enhances the IDS’s accuracy and makes its classification more reliable. To achieve this goal, first, two CNN-based IDS models were built to improve detection accuracy. Second, seven adversarial attack scenarios were designed against these models to test their reliability and efficiency. The experimental results show that the CNN-based IDS models achieved high accuracies of 97.51% and 95.43% before the adversarial scenarios were applied. The adversarial attacks then caused the models’ accuracy to decrease significantly, with the magnitude varying from one attack scenario to another; the Auto-PGD and BIM attacks had the strongest effect, with accuracy falling to 2.92% and 3.46%, respectively. Third, this research applied the adversarial perturbation elimination with generative adversarial nets (APE_GAN++) defense method to restore the accuracy of the CNN-based IDS models after they were affected by the adversarial attacks; accuracy recovered appreciably, with scores ranging between 78.12% and 89.40%. |
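The BIM (Basic Iterative Method) attack named in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's actual setup: a toy linear classifier on synthetic features stands in for the CNN-based IDS, and all names, parameters, and data below are illustrative assumptions. BIM simply repeats small FGSM-style signed-gradient steps, clipping the result to an L-infinity ball of radius eps around the original input:

```python
# Hypothetical BIM sketch: a toy linear "IDS" classifier on synthetic
# traffic features (illustrative only, not the paper's models or data).
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: benign (0) vs. attack (1) traffic features.
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.1

def predict(x):
    """Return class 1 if the logit x @ w + b is positive, else 0."""
    return (x @ w + b > 0).astype(int)

def bim_attack(x, y, eps=0.3, alpha=0.05, steps=10):
    """BIM: iterated signed-gradient steps, clipped to an eps L-inf ball."""
    x_adv = x.copy()
    for _ in range(steps):
        # For a linear model the input gradient of the logit is just w;
        # step in the direction that pushes the logit away from label y.
        step_dir = np.sign(w) * np.where(y == 1, -1.0, 1.0)[:, None]
        x_adv = x_adv + alpha * step_dir
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball
    return x_adv

X = rng.normal(size=(200, 4))
y = predict(X)                 # label with the model itself: clean accuracy 1.0
X_adv = bim_attack(X, y)
adv_acc = (predict(X_adv) == y).mean()
print(f"clean accuracy: 1.000, adversarial accuracy: {adv_acc:.3f}")
```

The same loop structure underlies Auto-PGD, which additionally adapts the step size and restarts; against a deep model the gradient would come from backpropagation rather than the closed-form `w` used here.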
doi_str_mv | 10.3390/su15129801 |
format | Article |
fullrecord | (raw ProQuest XML export omitted; it duplicates the title, abstract, creators, and subject fields above. Additional details it carries: publisher MDPI AG, Basel; ISSN/EISSN 2071-1050; author ORCID 0000-0003-3558-6737; open access under the Creative Commons Attribution (CC BY) license.) |
fulltext | fulltext |
identifier | ISSN: 2071-1050 |
ispartof | Sustainability, 2023-06, Vol.15 (12), p.9801 |
issn | 2071-1050 (print and electronic) |
language | eng |
recordid | cdi_proquest_journals_2829881560 |
source | MDPI - Multidisciplinary Digital Publishing Institute; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals |
subjects | Accuracy; Algorithms; Analysis; Classification; Construction; Datasets; Deep learning; Detectors; Internet; Machine learning; Model accuracy; Neural networks; Packets (communication); Research methodology; Researchers; Safety and security measures; Security software; Sustainability; Sustainable development |
title | Enhancing the Sustainability of Deep-Learning-Based Network Intrusion Detection Classifiers against Adversarial Attacks |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-14T01%3A56%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Enhancing%20the%20Sustainability%20of%20Deep-Learning-Based%20Network%20Intrusion%20Detection%20Classifiers%20against%20Adversarial%20Attacks&rft.jtitle=Sustainability&rft.au=Alotaibi,%20Afnan&rft.date=2023-06-01&rft.volume=15&rft.issue=12&rft.spage=9801&rft.pages=9801-&rft.issn=2071-1050&rft.eissn=2071-1050&rft_id=info:doi/10.3390/su15129801&rft_dat=%3Cgale_proqu%3EA758354967%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2829881560&rft_id=info:pmid/&rft_galeid=A758354967&rfr_iscdi=true |