Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers

Training pipelines for machine learning (ML) based malware classification often rely on crowdsourced threat feeds, exposing a natural attack injection point. In this paper, we study the susceptibility of feature-based ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging "clean label" attacks where attackers do not control the sample labeling process. We propose the use of techniques from explainable machine learning to guide the selection of relevant features and values to create effective backdoor triggers in a model-agnostic fashion. Using multiple reference datasets for malware classification, including Windows PE files, PDFs, and Android applications, we demonstrate effective attacks against a diverse set of machine learning models and evaluate the effect of various constraints imposed on the attacker. To demonstrate the feasibility of our backdoor attacks in practice, we create a watermarking utility for Windows PE files that preserves the binary's functionality, and we leverage similar behavior-preserving alteration methodologies for Android and PDF files. Finally, we experiment with potential defensive strategies and show the difficulties of completely defending against these attacks, especially when the attacks blend in with the legitimate sample distribution.
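
To make the explanation-guided idea concrete, below is a minimal, hypothetical sketch (not the authors' released tooling) of a clean-label poisoning loop on tabular features: it fits a surrogate model, ranks features by SHAP-style attributions, picks benign-looking values for the top-ranked features, and stamps that trigger onto a small fraction of benign training samples. The surrogate choice (LightGBM), the benign-median value heuristic, and the names `select_trigger` / `poison` are illustrative assumptions.

```python
# Illustrative sketch only: explanation-guided trigger selection plus
# clean-label poisoning on a tabular feature matrix X (NumPy array,
# n_samples x n_features) with labels y (1 = malware, 0 = benign).
import numpy as np
import lightgbm as lgb
import shap

def select_trigger(X, y, trigger_size=8):
    """Pick the most influential features and benign-looking values for them."""
    surrogate = lgb.LGBMClassifier(n_estimators=200).fit(X, y)
    shap_values = shap.TreeExplainer(surrogate).shap_values(X)
    # Some SHAP/LightGBM versions return a list of per-class arrays.
    contrib = shap_values[1] if isinstance(shap_values, list) else shap_values
    # Rank features by mean absolute attribution and keep the top ones.
    top_feats = np.argsort(np.abs(contrib).mean(axis=0))[::-1][:trigger_size]
    # Choose a value common among benign samples (here the benign median)
    # so the trigger blends into the legitimate goodware distribution.
    benign = X[y == 0]
    return {int(f): float(np.median(benign[:, f])) for f in top_feats}

def poison(X, y, trigger_vals, poison_rate=0.01, rng=np.random.default_rng(0)):
    """Clean-label poisoning: stamp the trigger onto benign samples only."""
    benign_idx = np.flatnonzero(y == 0)
    chosen = rng.choice(benign_idx, size=int(poison_rate * len(X)), replace=False)
    X_poisoned = X.copy()
    for f, v in trigger_vals.items():
        X_poisoned[chosen, f] = v  # labels stay benign (clean label)
    return X_poisoned
```

At deployment time, the same (feature, value) trigger would be embedded in malicious samples via behavior-preserving alterations such as those described in the abstract, so that the backdoored model misclassifies them as benign.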

Bibliographic Details
Main authors: Severi, Giorgio; Meyer, Jim; Coull, Scott; Oprea, Alina
Format: Article
Language: eng
Subjects: Computer Science - Cryptography and Security; Computer Science - Learning
Online access: Order full text
creator Severi, Giorgio; Meyer, Jim; Coull, Scott; Oprea, Alina
doi_str_mv 10.48550/arxiv.2003.01031
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2003.01031
language eng
recordid cdi_arxiv_primary_2003_01031
source arXiv.org
subjects Computer Science - Cryptography and Security; Computer Science - Learning
title Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-01T01%3A37%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Explanation-Guided%20Backdoor%20Poisoning%20Attacks%20Against%20Malware%20Classifiers&rft.au=Severi,%20Giorgio&rft.date=2020-03-02&rft_id=info:doi/10.48550/arxiv.2003.01031&rft_dat=%3Carxiv_GOX%3E2003_01031%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true