GLOW: Global Layout Aware Attacks on Object Detection
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Adversarial attacks aim to perturb images such that a predictor outputs
incorrect results. Because structured attacks have received limited attention, imposing
consistency checks on natural multi-object scenes is a promising and practical defense
against conventional adversarial attacks. More capable attacks should therefore be able
to fool defenses that rely on such consistency checks. We present GLOW, the first
approach that copes with various attack requests by generating global layout-aware
adversarial attacks, in which both categorical and geometric layout constraints are
explicitly established. Specifically, we focus on the object detection task: given a
victim image, GLOW first localizes victim objects according to target labels and then
generates multiple attack plans together with their context-consistency scores. On the
one hand, GLOW can handle various types of requests, including single or multiple victim
objects, with or without specified victim objects. On the other hand, it produces a
consistency score for each attack plan, reflecting overall contextual consistency in
terms of both semantic category and global scene layout. In experiments, we design
multiple types of attack requests and validate our ideas on MS COCO and Pascal.
Extensive experimental results demonstrate about 30% average relative improvement over
state-of-the-art methods on the conventional single-object attack request; our method
also outperforms SOTAs significantly on more generic attack requests, by about 20% on
average; finally, it produces superior performance under the challenging zero-query
black-box setting, 20% better than SOTAs. Our code, models, and attack requests will
be made available. |
DOI: | 10.48550/arxiv.2302.14166 |
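
The abstract describes a workflow of enumerating candidate attack plans for a victim image and ranking them by a context-consistency score that reflects both semantic category and global scene layout. The following minimal Python sketch only illustrates that plan-ranking idea; every name in it (AttackPlan, rank_plans, the toy scores) is hypothetical and does not reproduce GLOW's models or implementation.

```python
# Hypothetical sketch of ranking candidate attack plans by context consistency.
# Scores here are placeholders; GLOW derives them from learned layout models.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class AttackPlan:
    # Mapping from victim-object index to the incorrect target label requested.
    relabeling: Dict[int, str]
    # Contextual consistency of the plan: how plausible the new labels are given
    # the scene's categories and global layout (higher = more consistent).
    consistency_score: float


def rank_plans(plans: List[AttackPlan]) -> List[AttackPlan]:
    """Order candidate plans so the most context-consistent attack comes first."""
    return sorted(plans, key=lambda p: p.consistency_score, reverse=True)


if __name__ == "__main__":
    # Toy example for a street scene: relabeling a "car" as a "bus" is more
    # layout-consistent than relabeling it as a "boat".
    plans = [
        AttackPlan(relabeling={0: "boat"}, consistency_score=0.12),
        AttackPlan(relabeling={0: "bus"}, consistency_score=0.87),
    ]
    best = rank_plans(plans)[0]
    print("Most consistent plan:", best.relabeling, best.consistency_score)
```
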