MEAD: A Multi-Armed Approach for Evaluation of Adversarial Examples Detectors
Format: Article
Language: English
Abstract: Detection of adversarial examples has been a hot topic in recent years due to its importance for safely deploying machine learning algorithms in critical applications. However, detection methods are generally validated by assuming a single, implicitly known attack strategy, which does not necessarily account for real-life threats. Indeed, this can lead to an overoptimistic assessment of the detectors' performance and may introduce bias into the comparison between competing detection schemes. To overcome this limitation, we propose a novel multi-armed framework, called MEAD, for evaluating detectors based on several attack strategies. Among these, we make use of three new objectives to generate attacks. The proposed performance metric is based on the worst-case scenario: detection is successful if and only if all the different attacks are correctly recognized. Empirically, we show the effectiveness of our approach. Moreover, the poor performance obtained for state-of-the-art detectors opens an exciting new line of research.
DOI: 10.48550/arxiv.2206.15415
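To make the worst-case criterion described in the abstract concrete, the sketch below shows one way such an evaluation could be aggregated: an input counts as correctly handled only if the detector flags every adversarial example produced for it by every attack strategy. The `detector.flags_as_adversarial` method and the `attacks` list are hypothetical placeholders for illustration, not the paper's actual API.

```python
from typing import Callable, Iterable, List


def worst_case_detection_rate(
    detector,                 # hypothetical: exposes .flags_as_adversarial(x) -> bool
    attacks: List[Callable],  # each attack maps a clean input to an adversarial example
    clean_inputs: Iterable,
) -> float:
    """Fraction of inputs for which *every* attack variant is detected.

    A single missed attack on an input counts the whole input as a
    detection failure (worst-case scenario).
    """
    successes, total = 0, 0
    for x in clean_inputs:
        total += 1
        # Detection succeeds only if all attack strategies are recognized.
        if all(detector.flags_as_adversarial(attack(x)) for attack in attacks):
            successes += 1
    return successes / max(total, 1)
```

Under this aggregation, adding more attack strategies can only keep the reported detection rate the same or lower it, which is why single-attack evaluations tend to look overoptimistic by comparison.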