Mental Models of Adversarial Machine Learning
Main Authors: | , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Although machine learning is widely used in practice, little is known about practitioners' understanding of potential security challenges. In this work, we close this substantial gap and contribute a qualitative study focusing on developers' mental models of the machine learning pipeline and its potentially vulnerable components. Similar studies in other security fields have helped to discover root causes and to improve risk communication. Our study reveals two facets of practitioners' mental models of machine learning security. First, practitioners often confuse machine learning security with threats and defences that are not directly related to machine learning. Second, in contrast to most academic research, our participants perceive the security of machine learning not solely as a property of individual models, but in the context of entire workflows that consist of multiple components. Together with our additional findings, these two facets provide a foundation for substantiating mental models of machine learning security, and they have implications for the integration of adversarial machine learning into corporate workflows, for decreasing practitioners' reported uncertainty, and for appropriate regulatory frameworks for machine learning security. |
---|---|
DOI: | 10.48550/arxiv.2105.03726 |