SoK: Analyzing Adversarial Examples: A Framework to Study Adversary Knowledge
Format: Article
Language: English
Abstract: Adversarial examples are malicious inputs to machine learning models that trigger a misclassification. This type of attack has been studied for close to a decade, and we find that there is a lack of study and formalization of adversary knowledge when mounting attacks. This has yielded a complex space of attack research with hard-to-compare threat models and attacks. We focus on the image classification domain and provide a theoretical framework to study adversary knowledge inspired by work in order theory. We present an adversarial example game, inspired by cryptographic games, to standardize attacks. We survey recent attacks in the image classification domain and classify their adversary's knowledge in our framework. From this systematization, we compile results that both confirm existing beliefs about adversary knowledge, such as the potency of information about the attacked model, and allow us to derive new conclusions on the difficulty associated with the white-box and transferable threat models, for example, that transferable attacks might not be as difficult as previously thought.
DOI: 10.48550/arxiv.2402.14937
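For a concrete sense of the attack type the abstract describes, the sketch below implements the standard fast gradient sign method (FGSM), a well-known white-box adversarial example attack that relies on gradients of the attacked model. This is a generic reference attack, not the framework or adversarial example game proposed in the paper; the toy classifier and random data are placeholders.

```python
# Minimal FGSM sketch (white-box adversarial example attack).
# Model and data below are placeholders for illustration only.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x within an L-infinity budget to push the model toward misclassification.

    model:   differentiable classifier returning logits
    x:       input batch with values assumed in [0, 1]
    y:       true labels
    epsilon: maximum per-pixel perturbation
    """
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy linear classifier on random 32x32 RGB "images" (placeholder, not a trained model).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(8, 3, 32, 32)
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm_attack(model, x, y)
    # Fraction of inputs whose predicted label is unchanged after the perturbation.
    print((model(x).argmax(1) == model(x_adv).argmax(1)).float().mean())
```

FGSM needs gradient access to the attacked model, so it sits at the knowledge-rich end of the white-box/transferable spectrum the paper's framework is meant to systematize.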