A Statistical Defense Approach for Detecting Adversarial Examples
Saved in:

Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Adversarial examples are maliciously modified inputs created to fool deep neural networks (DNNs). The discovery of such inputs presents a major issue to the expansion of DNN-based solutions. Many researchers have already contributed to the topic, providing both cutting-edge attack techniques and various defensive strategies. In this work, we focus on the development of a system capable of detecting adversarial samples by exploiting statistical information from the training set. Our detector computes several distorted replicas of the test input, then collects the classifier's prediction vectors to build a meaningful signature for the detection task. The signature is then projected onto the class-specific statistic vector to infer the input's nature; the class-statistic vector is selected according to the classifier's output on the original input. We show that our method reliably detects malicious inputs, outperforming state-of-the-art approaches in various settings, while being complementary to other defensive solutions.
DOI: 10.48550/arxiv.1908.09705
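The summary describes a pipeline of distorted input replicas, prediction-vector signatures, and a projection onto a class-specific statistic selected by the classifier's own prediction. The sketch below illustrates that pipeline under stated assumptions; the choice of distortion (Gaussian noise), the use of a mean training-set signature as the per-class statistic, the cosine-similarity projection, and the threshold are illustrative assumptions, and `classify` is a hypothetical stand-in for the trained DNN, not the paper's actual model.

```python
import numpy as np

NUM_CLASSES = 10  # assumed number of classes for the sketch


def classify(x, rng=np.random.default_rng(0)):
    """Hypothetical stand-in for the trained DNN: returns a softmax vector."""
    logits = rng.normal(size=NUM_CLASSES)
    e = np.exp(logits - logits.max())
    return e / e.sum()


def distort(x, seed):
    """One distorted replica of the input (here: additive Gaussian noise)."""
    rng = np.random.default_rng(seed)
    return np.clip(x + rng.normal(scale=0.05, size=x.shape), 0.0, 1.0)


def signature(x, num_replicas=8):
    """Stack the classifier's prediction vectors over several distorted replicas."""
    preds = [classify(distort(x, seed=s)) for s in range(num_replicas)]
    return np.concatenate(preds)  # length num_replicas * NUM_CLASSES


def fit_class_statistics(train_inputs, train_labels):
    """Per-class statistic vector: here, the mean signature of benign training samples."""
    stats = {}
    for c in range(NUM_CLASSES):
        sigs = [signature(x) for x, y in zip(train_inputs, train_labels) if y == c]
        stats[c] = np.mean(sigs, axis=0)
    return stats


def is_adversarial(x, class_stats, threshold=0.9):
    """Project the test signature onto the statistic of the predicted class and threshold."""
    predicted_class = int(np.argmax(classify(x)))  # class chosen on the original input
    sig = signature(x)
    ref = class_stats[predicted_class]
    score = np.dot(sig, ref) / (np.linalg.norm(sig) * np.linalg.norm(ref) + 1e-12)
    return score < threshold  # low agreement with the class statistic -> flag as adversarial
```

In this reading, benign inputs yield signatures that agree with the statistic of their predicted class, while adversarial inputs, whose distorted replicas tend to produce inconsistent prediction vectors, score lower under the projection; the actual statistic, projection, and decision rule used by the authors may differ.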