Robust Attacks against Multiple Classifiers
Main authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | We address the challenge of designing optimal adversarial noise algorithms for settings where a learner has access to multiple classifiers. We demonstrate how this problem can be framed as finding strategies at equilibrium in a two-player, zero-sum game between a learner and an adversary. In doing so, we illustrate the need for randomization in adversarial attacks. To compute a Nash equilibrium, our main technical focus is the design of best response oracles that can be implemented within a Multiplicative Weights Update framework to boost deterministic perturbations against a set of models into optimal mixed strategies. We demonstrate the practical effectiveness of our approach on a series of image classification tasks using both linear classifiers and deep neural networks. |
DOI: | 10.48550/arxiv.1906.02816 |
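
The summary names a concrete algorithmic pattern: run Multiplicative Weights Update over the set of models while a best-response oracle supplies deterministic perturbations, then take the resulting collection of perturbations as the adversary's mixed strategy. A minimal sketch of that generic loop follows, assuming hypothetical `models`, `best_response`, and `loss` interfaces; it illustrates the MWU-plus-oracle pattern, not the authors' actual implementation.

```python
import numpy as np

def mwu_mixed_attack(models, x, y, best_response, loss, T=100, eta=0.1):
    """Boost deterministic best-response perturbations into a mixed strategy.

    models        : the learner's pure strategies (a list of classifiers)
    best_response : oracle returning a perturbation d that maximizes the
                    weighted loss  sum_i w[i] * loss(models[i], x + d, y)
    loss          : per-model loss on the perturbed input; larger means
                    the model is fooled more badly
    Returns a list of T perturbations; the attack plays one uniformly
    at random, which is the mixed strategy referred to in the summary.
    """
    n = len(models)
    w = np.ones(n) / n                        # learner's weights over models
    perturbations = []
    for _ in range(T):
        d = best_response(models, w, x, y)    # adversary's deterministic reply
        perturbations.append(d)
        losses = np.array([loss(m, x + d, y) for m in models])
        w = w * np.exp(-eta * losses)         # MWU: shift weight toward
        w = w / w.sum()                       # models that resisted d
    return perturbations
```

Playing a perturbation drawn uniformly from the returned list is what makes the attack randomized: no single deterministic perturbation has to fool every classifier at once, yet the mixture can be near-optimal against whichever model the learner deploys.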