Selecting Models based on the Risk of Damage Caused by Adversarial Attacks
Regulation, legal liabilities, and societal concerns challenge the adoption of AI in safety and security-critical applications. One of the key concerns is that adversaries can cause harm by manipulating model predictions without being detected. Regulation hence demands an assessment of the risk of damage caused by adversaries. Yet, there is no method to translate this high-level demand into actionable metrics that quantify the risk of damage. In this article, we propose a method to model and statistically estimate the probability of damage arising from adversarial attacks. We show that our proposed estimator is statistically consistent and unbiased. In experiments, we demonstrate that the estimation results of our method have a clear and actionable interpretation and outperform conventional metrics. We then show how operators can use the estimation results to reliably select the model with the lowest risk.
Published in: | arXiv.org, 2023-01-28
---|---
Main authors: | Klemenc, Jona; Trittenbach, Holger
Format: | Article
Language: | English
Subjects: | Damage; Legal liability
EISSN: | 2331-8422
Publisher: | Cornell University Library, arXiv.org (Ithaca)
Online access: | Full text
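The record does not reproduce the authors' estimator. As a rough, non-authoritative sketch of the kind of quantity the abstract describes, the snippet below estimates the probability of damage as a Monte Carlo sample mean over attacked inputs and then selects the model with the lowest estimate. All names (`attack`, `causes_damage`, `models`) are hypothetical placeholders, not the paper's API.

```python
# Hypothetical sketch, not the paper's method: estimate the probability
# of damage from adversarial attacks by a Monte Carlo sample mean, then
# pick the model whose estimated damage probability is lowest.
import random


def estimate_damage_probability(model, inputs, attack, causes_damage, n=1000):
    """Sample mean of a damage indicator over n attacked inputs."""
    hits = 0
    for _ in range(n):
        x = random.choice(inputs)        # draw an input from the data
        x_adv = attack(model, x)         # adversarially perturb it
        if causes_damage(model, x_adv):  # did the attack cause damage?
            hits += 1
    return hits / n                      # estimate of P(damage)


def select_lowest_risk(models, inputs, attack, causes_damage):
    """Pick the model with the smallest estimated damage probability."""
    return min(models, key=lambda m: estimate_damage_probability(
        m, inputs, attack, causes_damage))
```

A sample mean of an indicator variable is an unbiased estimator of the underlying probability and is consistent by the law of large numbers, which matches the statistical properties the abstract claims for the proposed estimator; the paper's actual damage model is more specific than this placeholder.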