SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Saved in:
Format: Article
Language: English
Online access: Order full text
Summary:
SentiNet is a novel detection framework for localized universal attacks on neural networks. These attacks restrict adversarial noise to contiguous portions of an image and are reusable across different images -- constraints that prove useful for generating physically realizable attacks. Unlike most other work on adversarial detection, SentiNet requires neither training a model nor prior knowledge of an attack before detection. This approach is appealing given the large number of possible mechanisms and attack vectors that an attack-specific defense would have to consider. By leveraging the neural network's susceptibility to attacks, and by using techniques from model interpretability and object detection as detection mechanisms, SentiNet turns a weakness of a model into a strength. We demonstrate the effectiveness of SentiNet on three different attacks -- data poisoning attacks, trojaned networks, and adversarial patches (including physically realizable attacks) -- and show that our defense achieves competitive performance against all three threats. Finally, we show that SentiNet is robust against strong adaptive adversaries who build adversarial patches that specifically target the components of SentiNet's architecture.
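The abstract's core idea (localize the region most responsible for a prediction, transplant it onto held-out benign images, and flag inputs whose salient region hijacks classifications across many images) can be illustrated with a minimal sketch. This is not the paper's implementation: the function names (`extract_salient_mask`, `overlay_patch`, `fooled_fraction`), the quantile threshold, and the generic `classify` callable are all illustrative assumptions; in the paper the saliency map would come from an interpretability method such as Grad-CAM applied to the deployed model.

```python
import numpy as np

def extract_salient_mask(saliency, quantile=0.85):
    """Threshold a saliency map (e.g. from Grad-CAM; illustrative here)
    to a binary mask over the most influential region of the input."""
    thresh = np.quantile(saliency, quantile)
    return saliency >= thresh

def overlay_patch(benign_image, suspect_image, mask):
    """Paste the suspect image's salient region onto a benign image."""
    out = benign_image.copy()
    out[mask] = suspect_image[mask]
    return out

def fooled_fraction(classify, suspect_image, suspect_label, benign_images, mask):
    """SentiNet-style check: overlay the suspect's salient region onto
    held-out benign images and measure how often the classifier flips to
    the suspect's label. A high fraction suggests a localized universal
    (patch-like) attack rather than ordinary salient content."""
    flips = sum(
        classify(overlay_patch(img, suspect_image, mask)) == suspect_label
        for img in benign_images
    )
    return flips / len(benign_images)
```

In this sketch a toy `classify` stands in for the protected network; a malicious patch drives the fooled fraction toward 1.0 on benign carriers, while an inert region leaves it low. The paper's actual decision rule additionally compares against the behavior of inert patterns of the same size, which is omitted here.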
DOI: 10.48550/arxiv.1812.00292