Failure Modes in Machine Learning Systems
In the last two years, more than 200 papers have been written on how machine learning (ML) systems can fail because of adversarial attacks on the algorithms and data; this number balloons if we were to incorporate papers covering non-adversarial failure modes. The spate of papers has made it difficult for ML practitioners, let alone engineers, lawyers, and policymakers, to keep up with the attacks against and defenses of ML systems. However, as these systems become more pervasive, the need to understand how they fail, whether by the hand of an adversary or due to the inherent design of a system, will only become more pressing. In order to equip software developers, security incident responders, lawyers, and policymakers with a common vernacular to talk about this problem, we developed a framework to classify failures into "Intentional failures", where the failure is caused by an active adversary attempting to subvert the system to attain her goals, and "Unintentional failures", where the failure is because an ML system produces an inherently unsafe outcome. After developing the initial version of the taxonomy last year, we worked with security and ML teams across Microsoft, 23 external partners, standards organizations, and governments to understand how stakeholders would use our framework. Throughout the paper, we attempt to highlight how machine learning failure modes are meaningfully different from traditional software failures from a technology and policy perspective.
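The framework in the abstract splits failure modes into two top-level classes: intentional failures driven by an active adversary, and unintentional failures that arise from the inherent design of the system. Below is a minimal sketch of how such a two-way taxonomy could be represented in code; the two class names come from the abstract, while all identifiers (`FailureClass`, `FailureMode`) and the example entries are illustrative assumptions, not the paper's own schema.

```python
from dataclasses import dataclass
from enum import Enum, auto


class FailureClass(Enum):
    """Top-level split described in the abstract."""
    INTENTIONAL = auto()    # caused by an active adversary subverting the system
    UNINTENTIONAL = auto()  # the ML system produces an inherently unsafe outcome


@dataclass
class FailureMode:
    """One entry in the taxonomy (illustrative structure, not the paper's schema)."""
    name: str
    failure_class: FailureClass
    description: str


# Hypothetical example entries; the paper defines its own catalogue of modes.
examples = [
    FailureMode("adversarial perturbation", FailureClass.INTENTIONAL,
                "attacker crafts inputs that force an incorrect prediction"),
    FailureMode("distributional shift", FailureClass.UNINTENTIONAL,
                "deployed inputs drift away from the training distribution"),
]

for mode in examples:
    print(f"{mode.failure_class.name}: {mode.name} - {mode.description}")
```

Keeping the intentional/unintentional split as the first-level key mirrors the common vernacular the authors propose for developers, incident responders, lawyers, and policymakers.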
Saved in:
Main authors: | Kumar, Ram Shankar Siva ; Brien, David O ; Albert, Kendra ; Viljöen, Salomé ; Snover, Jeffrey |
---|---|
Format: | Article |
Language: | eng |
Published: | 2019-11-25 |
Subjects: | Computer Science - Cryptography and Security ; Computer Science - Learning ; Statistics - Machine Learning |
Online access: | Order full text |
creator | Kumar, Ram Shankar Siva ; Brien, David O ; Albert, Kendra ; Viljöen, Salomé ; Snover, Jeffrey |
doi_str_mv | 10.48550/arxiv.1911.11034 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1911.11034 |
language | eng |
recordid | cdi_arxiv_primary_1911_11034 |
source | arXiv.org |
subjects | Computer Science - Cryptography and Security ; Computer Science - Learning ; Statistics - Machine Learning |
title | Failure Modes in Machine Learning Systems |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-05T08%3A57%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Failure%20Modes%20in%20Machine%20Learning%20Systems&rft.au=Kumar,%20Ram%20Shankar%20Siva&rft.date=2019-11-25&rft_id=info:doi/10.48550/arxiv.1911.11034&rft_dat=%3Carxiv_GOX%3E1911_11034%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |