Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks

Machine Learning (ML) models are known to be vulnerable to adversarial inputs and researchers have demonstrated that even production systems, such as self-driving cars and ML-as-a-service offerings, are susceptible. These systems represent a target for bad actors. Their disruption can cause real physical and economic harm. When attacks on production ML systems occur, the ability to attribute the attack to the responsible threat group is a critical step in formulating a response and holding the attackers accountable. We pose the following question: can adversarially perturbed inputs be attributed to the particular methods used to generate the attack? In other words, is there a way to find a signal in these attacks that exposes the attack algorithm, model architecture, or hyperparameters used in the attack? We introduce the concept of adversarial attack attribution and create a simple supervised learning experimental framework to examine the feasibility of discovering attributable signals in adversarial attacks. We find that it is possible to differentiate attacks generated with different attack algorithms, models, and hyperparameters on both the CIFAR-10 and MNIST datasets.
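The abstract frames attack attribution as a supervised learning problem: perturbations produced by different attack methods carry statistical signatures that a classifier can learn. The toy sketch below illustrates that framing only; the two synthetic "attack" families, the feature choice, and the threshold classifier are all hypothetical stand-ins, not the paper's actual experimental setup.

```python
# Illustrative sketch (not the authors' code): attack attribution as
# supervised learning over perturbation statistics.
import numpy as np

rng = np.random.default_rng(0)

def make_perturbations(n, kind):
    """Toy stand-ins for adversarial perturbations from two attack families:
    'sign' mimics an L-infinity, sign-based attack (uniform +/- eps),
    'dense' mimics an L2-style attack (small Gaussian noise)."""
    if kind == "sign":
        return 0.03 * rng.choice([-1.0, 1.0], size=(n, 64))
    return rng.normal(0.0, 0.03, size=(n, 64))

# Labeled dataset: label 0 = 'sign' attack, label 1 = 'dense' attack.
X = np.vstack([make_perturbations(500, "sign"),
               make_perturbations(500, "dense")])
y = np.array([0] * 500 + [1] * 500)

# Attributable signal: sign-based perturbations have constant magnitude,
# so the per-example spread of |x| separates the two families.
spread = np.abs(X).std(axis=1)

# Simplest possible "classifier": threshold the spread at its global mean.
threshold = spread.mean()
pred = (spread > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"attribution accuracy: {accuracy:.2f}")
```

On this synthetic data the separation is nearly perfect, which is only a cartoon of the paper's finding that attack algorithms, models, and hyperparameters leave distinguishable signals in real adversarial examples.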

Bibliographic Details
Published in: arXiv.org 2021-01
Main authors: Dotter, Marissa; Xie, Sherry; Manville, Keith; Harguess, Josh; Busho, Colin; Rodriguez, Mikel
Format: Article
Language: eng
Online access: Full text
Identifier: EISSN 2331-8422
Source: Free E-Journals
Subjects: Algorithms; Autonomous cars; Machine learning
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T12%3A54%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Adversarial%20Attack%20Attribution:%20Discovering%20Attributable%20Signals%20in%20Adversarial%20ML%20Attacks&rft.jtitle=arXiv.org&rft.au=Dotter,%20Marissa&rft.date=2021-01-08&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2476743589%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2476743589&rft_id=info:pmid/&rfr_iscdi=true