Adversarial examples for network intrusion detection systems
Published in: | Journal of Computer Security, 2022-01, Vol.30 (5), p.727-752 |
---|---|
Authors: | Sheatsley, Ryan; Papernot, Nicolas; Weisman, Michael J.; Verma, Gunjan; McDaniel, Patrick |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Constraints; Domains; Histograms; Intrusion detection systems; Machine learning; Object recognition; Perturbation; Robustness; Service introduction; Sketches |
Online access: | Full text |
Abstract: | Machine learning-based network intrusion detection systems have demonstrated state-of-the-art accuracy in flagging malicious traffic. However, machine learning has been shown to be vulnerable to adversarial examples, particularly in domains such as image recognition. In many threat models, the adversary exploits the unconstrained nature of images: the adversary is free to perturb an arbitrary number of pixels. It is less clear how these attacks translate to domains such as network intrusion detection, which carry domain constraints that limit which features the adversary can modify and how. In this paper, we explore whether the constrained nature of networks offers additional robustness against adversarial examples compared with the unconstrained nature of images. We do this by creating two algorithms: (1) the Adaptive-JSMA, an augmented version of the popular JSMA that obeys domain constraints, and (2) Histogram Sketch Generation, which generates adversarial sketches: targeted universal perturbation vectors that encode feature saliency within the envelope of domain constraints. To assess how these algorithms perform, we evaluate them in a constrained network intrusion detection setting and an unconstrained image recognition setting. The results show that our approaches achieve misclassification rates in network intrusion detection applications comparable to those in image recognition applications (greater than 95%). Our investigation shows that the constrained attack surface exposed by network intrusion detection systems is still sufficiently large to craft successful adversarial examples; thus, network constraints do not appear to add robustness against adversarial examples. Indeed, even if a defender constrains an adversary to as few as five random features, generating adversarial examples is still possible. |
DOI: | 10.3233/JCS-210094 |
Publisher: | Amsterdam: IOS Press BV |
ISSN: | 0926-227X |
EISSN: | 1875-8924 |
Source: | Business Source Complete |
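The abstract above names the two attack algorithms but, as a catalog record, carries no implementation detail. For readers who want the general shape of a constraint-aware JSMA-style attack, the sketch below greedily perturbs one feature at a time, restricted to a set of modifiable features and clipped to per-feature valid ranges. It is a minimal illustration of the idea, not the paper's Adaptive-JSMA: the function name `constrained_jsma`, the toy two-class flow classifier, the feature mask, and the [0, 1] ranges are all hypothetical assumptions.

```python
# A minimal, hypothetical sketch of a constraint-aware JSMA-style attack.
# NOT the authors' Adaptive-JSMA: model, mask, and ranges are assumptions.
import torch

def constrained_jsma(model, x, target, modifiable, lo, hi,
                     step=0.1, max_iters=50):
    """Greedy targeted attack: at each step, nudge the single modifiable
    feature whose gradient most increases the target-class logit, then
    clip it back into its valid [lo, hi] range (the domain constraint)."""
    x_adv = x.clone().detach()
    for _ in range(max_iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        if logits.argmax(dim=1).item() == target:
            break  # already classified as the target class
        # Gradient of the target-class logit w.r.t. the input features
        grad = torch.autograd.grad(logits[0, target], x_adv)[0][0]
        # Saliency = gradient magnitude, masked to modifiable features
        saliency = grad.abs() * modifiable
        # Ignore features already saturated in the gradient's direction
        saturated = ((grad > 0) & (x_adv[0] >= hi)) | \
                    ((grad < 0) & (x_adv[0] <= lo))
        saliency = saliency.masked_fill(saturated, 0.0)
        if saliency.max().item() == 0:
            break  # no legal perturbation left inside the constraints
        i = saliency.argmax().item()
        x_adv = x_adv.detach()
        x_adv[0, i] = (x_adv[0, i] + step * grad[i].sign()).clamp(
            float(lo[i]), float(hi[i]))
    return x_adv.detach()

# Toy usage (all sizes and values hypothetical): a 10-feature flow
# classifier where only some features are attacker-controllable.
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
x = torch.rand(1, 10)
modifiable = torch.tensor([1., 1., 1., 0., 0., 1., 0., 1., 0., 0.])
lo, hi = torch.zeros(10), torch.ones(10)
x_adv = constrained_jsma(model, x, target=0, modifiable=modifiable, lo=lo, hi=hi)
```

The paper's actual algorithm presumably handles richer constraint families than the simple box bounds and binary mask used here (the abstract speaks of constraints on "which and how features can be modified"); this sketch captures only that basic envelope, with the defining property that every perturbation stays inside the permitted feature set and ranges.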