Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection
Saved in:
Main authors: , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Summary: Although deep neural networks have shown promising performance on various tasks, even achieving human-level performance on some, they have been shown to make incorrect predictions under imperceptibly small perturbations to an input. A large number of previous works propose to defend against such adversarial attacks, either by robust inference or by detection of adversarial inputs. Yet most of them cannot effectively defend against whitebox attacks, where an adversary has knowledge of the model and the defense. More importantly, they do not provide a convincing reason why the generated adversarial inputs successfully fool the target models. To address these shortcomings of the existing approaches, we hypothesize that adversarial inputs are tied to latent features that are susceptible to adversarial perturbation, which we call vulnerable features. Based on this intuition, we propose a minimax game formulation to disentangle the latent features of each instance into robust and vulnerable ones, using variational autoencoders with two latent spaces. We thoroughly validate our model against both blackbox and whitebox attacks on the MNIST, Fashion MNIST, and Cat & Dog datasets; the results show that adversarial inputs cannot bypass our detector without changing their semantics, in which case the attack has failed.
DOI: 10.48550/arxiv.1909.04311
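
The central mechanism described in the abstract (a VAE-style encoder that splits each instance's latent representation into a robust part and a vulnerable part, trained through a minimax game, with detection based on how the two parts behave) can be illustrated with a minimal sketch. The code below is a hypothetical PyTorch outline, not the authors' implementation: the layer sizes, the specific loss terms (the KL regularizer is omitted for brevity), the weight `beta`, the disagreement-based detector, and the assumption that an adversarial example `x_adv` is produced externally (e.g., by a PGD attacker playing the maximization side of the game) are all assumptions made for illustration.

```python
# Hypothetical sketch: a VAE-style model with two latent spaces, one intended
# to capture robust features (z_r) and one vulnerable features (z_v).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentangledVAE(nn.Module):
    def __init__(self, in_dim=784, hid=256, z_dim=16, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        # Separate Gaussian posterior parameters for the two latent spaces.
        self.mu_r = nn.Linear(hid, z_dim)
        self.logvar_r = nn.Linear(hid, z_dim)
        self.mu_v = nn.Linear(hid, z_dim)
        self.logvar_v = nn.Linear(hid, z_dim)
        # A shared decoder reconstructs x from the concatenated latents.
        self.dec = nn.Sequential(nn.Linear(2 * z_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, in_dim))
        # One classifier head per latent space.
        self.cls_r = nn.Linear(z_dim, n_classes)
        self.cls_v = nn.Linear(z_dim, n_classes)

    @staticmethod
    def reparam(mu, logvar):
        # Standard reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        h = self.enc(x)
        z_r = self.reparam(self.mu_r(h), self.logvar_r(h))
        z_v = self.reparam(self.mu_v(h), self.logvar_v(h))
        x_rec = self.dec(torch.cat([z_r, z_v], dim=-1))
        return x_rec, z_r, z_v


def training_loss(model, x, x_adv, y, beta=1.0):
    """Illustrative minimization objective (the maximization side, i.e. the
    attacker that crafts x_adv, is assumed to run outside this function):
      * reconstruct x from both latents (KL term omitted for brevity),
      * classify correctly from z_r on clean AND adversarial inputs, so the
        robust latent must rely on perturbation-invariant evidence,
      * classify from z_v on clean inputs only, so label evidence that breaks
        under perturbation is free to collect in the vulnerable latent.
    """
    x_rec, z_r, z_v = model(x)
    _, z_r_adv, _ = model(x_adv)
    rec_loss = F.mse_loss(x_rec, x)
    robust_loss = (F.cross_entropy(model.cls_r(z_r), y)
                   + F.cross_entropy(model.cls_r(z_r_adv), y))
    vuln_loss = F.cross_entropy(model.cls_v(z_v), y)
    return rec_loss + robust_loss + beta * vuln_loss


def looks_adversarial(model, x):
    """Illustrative detector: flag inputs on which the robust-latent and
    vulnerable-latent heads disagree about the predicted class."""
    _, z_r, z_v = model(x)
    return model.cls_r(z_r).argmax(-1) != model.cls_v(z_v).argmax(-1)
```

Under this sketch, the robust head is trained to stay stable under perturbation while the vulnerable head is not, so disagreement between the two heads on a given input serves as the detection signal; an attack that also fools the robust head would, by construction, have to alter the perturbation-invariant evidence, i.e., the input's semantics.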