Machine Learning Security in Industry: A Quantitative Survey
Despite the large body of academic work on machine learning security, little is known about the occurrence of attacks on machine learning systems in the wild. In this paper, we report on a quantitative study with 139 industrial practitioners. We analyze attack occurrence and concern, and evaluate statistical hypotheses on factors influencing threat perception and exposure. Our results shed light on real-world attacks on deployed machine learning. On the organizational level, while we find no predictors for threat exposure in our sample, the number of implemented defenses depends on exposure to threats or on the expected likelihood of becoming a target. We also provide a detailed analysis of practitioners' replies on the relevance of individual machine learning attacks, unveiling complex concerns like unreliable decision making, business information leakage, and the introduction of bias into models. Finally, we find that on the individual level, prior knowledge about machine learning security influences threat perception. Our work paves the way for more research on adversarial machine learning in practice, but also yields insights for regulation and auditing.
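The paper's actual analysis code is not part of this record, but as a purely illustrative sketch of the kind of statistical hypothesis test such a survey study might run (all data and variable names below are hypothetical, not taken from the paper), one could compare threat-perception ratings between respondents with and without prior ML-security knowledge:

```python
# Purely illustrative, not the paper's method: a Mann-Whitney U test comparing
# hypothetical 1-5 Likert threat-perception ratings between respondents with
# and without prior knowledge of machine learning security.
from scipy.stats import mannwhitneyu

with_prior_knowledge = [4, 5, 3, 4, 4, 5, 3, 4]     # hypothetical ratings
without_prior_knowledge = [2, 3, 3, 2, 4, 2, 3, 3]  # hypothetical ratings

stat, p_value = mannwhitneyu(
    with_prior_knowledge, without_prior_knowledge, alternative="two-sided"
)
print(f"U = {stat}, p = {p_value:.4f}")  # a small p suggests the groups differ
```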
Published in: | arXiv.org, 2023-03 |
---|---|
Main authors: | Grosse, Kathrin; Bieringer, Lukas; Besold, Tarek Richard; Biggio, Battista; Krombholz, Katharina |
Format: | Article |
Language: | English |
Subjects: | Computer Science - Computers and Society; Computer Science - Cryptography and Security; Computer Science - Learning; Decision making; Exposure; Machine learning; Perception; Security |
Online access: | Full text |
DOI: | 10.48550/arxiv.2207.05164 |
Publisher: | Ithaca: Cornell University Library, arXiv.org |
EISSN: | 2331-8422 |
URL: | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-02T17%3A59%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Machine%20Learning%20Security%20in%20Industry:%20A%20Quantitative%20Survey&rft.jtitle=arXiv.org&rft.au=Grosse,%20Kathrin&rft.date=2023-03-10&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2207.05164&rft_dat=%3Cproquest_arxiv%3E2688767456%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2688767456&rft_id=info:pmid/&rfr_iscdi=true |