Effect of Balancing Data Using Synthetic Data on the Performance of Machine Learning Classifiers for Intrusion Detection in Computer Networks


Detailed Description

Saved in:
Bibliographic Details
Published in: IEEE Access, 2022, Vol. 10, pp. 96731-96747
Main Authors: Dina, Ayesha Siddiqua; Siddique, A. B.; Manivannan, D.
Format: Article
Language: English
Subjects:
Online Access: Full text
Description: Attacks on computer networks have increased significantly in recent years, due in part to the availability of sophisticated tools for launching such attacks as well as the thriving underground cyber-crime economy that supports them. Over the past several years, researchers in academia and industry have used machine learning (ML) techniques to design and implement Intrusion Detection Systems (IDSes) for computer networks. Many of these researchers used datasets collected by various organizations to train ML classifiers for detecting intrusions. In many of the datasets used to train ML classifiers in such systems, the data are imbalanced (i.e., not all classes have an equal number of samples). ML classifiers trained on such imbalanced datasets may produce unsatisfactory results. Traditionally, researchers have used over-sampling and under-sampling to balance the data and overcome this problem. In this work, in addition to random over-sampling, we also used a synthetic data generation method, the Conditional Generative Adversarial Network (CTGAN), to balance the data and studied its effect on the performance of various widely used ML classifiers. To the best of our knowledge, no one else has used CTGAN to generate synthetic samples for balancing intrusion detection datasets. Based on extensive experiments using the widely used NSL-KDD and UNSW-NB15 datasets, we found that training ML classifiers on datasets balanced with synthetic samples generated by CTGAN increased their prediction accuracy by up to 8% and improved their MCC score by up to 13%, compared to training the same ML classifiers on the imbalanced datasets. We also show that this approach consistently performs better than some recently proposed state-of-the-art IDSes on both datasets. Our experiments also demonstrate that the accuracy of some ML classifiers trained on datasets balanced with random over-sampling declines compared to the same classifiers trained on the original imbalanced datasets.
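The description above outlines the general workflow: grow the minority classes of an intrusion-detection training set with CTGAN-generated synthetic samples, train ML classifiers on both the original and the balanced data, and compare accuracy and MCC. Below is a minimal sketch of that idea, assuming the open-source `ctgan` and `scikit-learn` Python packages and numerically encoded feature columns; the function names, column names, and file names are hypothetical, and this is not the authors' exact pipeline.

```python
# Sketch: balance an intrusion-detection training set with CTGAN-generated
# synthetic minority-class rows, then compare a classifier trained on the
# original vs. the balanced data using accuracy and MCC.
# Assumptions: `ctgan` and scikit-learn are installed, feature columns are
# already numerically encoded, and the label column holds class names.
import pandas as pd
from ctgan import CTGAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, matthews_corrcoef


def balance_with_ctgan(train_df, label_col, epochs=100):
    """Grow every minority class to the size of the largest class using
    CTGAN-generated synthetic rows."""
    counts = train_df[label_col].value_counts()
    target = counts.max()
    parts = [train_df]
    for cls, n in counts.items():
        if n >= target:
            continue
        minority = train_df[train_df[label_col] == cls]
        # Categorical columns (here just the label) must be declared to CTGAN.
        discrete = [c for c in minority.columns if minority[c].dtype == object]
        gan = CTGAN(epochs=epochs)
        gan.fit(minority, discrete_columns=discrete)
        parts.append(gan.sample(target - n))  # synthetic rows only
    return pd.concat(parts, ignore_index=True)


def evaluate(train_df, test_df, label_col):
    """Train a RandomForest and report (accuracy, MCC) on the test split."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(train_df.drop(columns=[label_col]), train_df[label_col])
    pred = clf.predict(test_df.drop(columns=[label_col]))
    return (accuracy_score(test_df[label_col], pred),
            matthews_corrcoef(test_df[label_col], pred))


# Hypothetical usage with CSV exports of the NSL-KDD train/test splits:
# train = pd.read_csv("nslkdd_train.csv")
# test = pd.read_csv("nslkdd_test.csv")
# print("imbalanced:    ", evaluate(train, test, "label"))
# print("CTGAN-balanced:", evaluate(balance_with_ctgan(train, "label"), test, "label"))
```

The same comparison could be repeated with random over-sampling in place of `balance_with_ctgan` to reproduce the contrast the abstract draws between the two balancing strategies.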
DOI: 10.1109/ACCESS.2022.3205337
ISSN: 2169-3536
Source: IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals
Subjects:
Balancing
Behavioral sciences
Classifiers
Computational modeling
Computer networks
conditional generative adversarial network (CTGAN)
Crime
Cyber security
Cyberattack
data imbalance problem
Data models
Datasets
Generative adversarial networks
Intrusion detection
Intrusion detection systems
Machine learning
over-sampling
Sampling
Synthetic data
Training
Training data
under-sampling
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T23%3A05%3A55IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Effect%20of%20Balancing%20Data%20Using%20Synthetic%20Data%20on%20the%20Performance%20of%20Machine%20Learning%20Classifiers%20for%20Intrusion%20Detection%20in%20Computer%20Networks&rft.jtitle=IEEE%20access&rft.au=Dina,%20Ayesha%20Siddiqua&rft.date=2022&rft.volume=10&rft.spage=96731&rft.epage=96747&rft.pages=96731-96747&rft.issn=2169-3536&rft.eissn=2169-3536&rft.coden=IAECCG&rft_id=info:doi/10.1109/ACCESS.2022.3205337&rft_dat=%3Cproquest_cross%3E2716347058%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2716347058&rft_id=info:pmid/&rft_ieee_id=9882118&rft_doaj_id=oai_doaj_org_article_582b4dc6fd3f4bb7b0a4a1dc9c439d2a&rfr_iscdi=true