Segment (Almost) Nothing: Prompt-Agnostic Adversarial Attacks on Segmentation Models
Saved in:
Main authors:
Format: Article
Language: eng
Keywords:
Online access: Order full text
Summary: General purpose segmentation models are able to generate (semantic) segmentation masks from a variety of prompts, including visual (points, boxes, etc.) and textual (object names) ones. In particular, input images are pre-processed by an image encoder to obtain embedding vectors which are later used for mask predictions. Existing adversarial attacks target the end-to-end task, i.e. they aim at altering the segmentation mask predicted for a specific image-prompt pair. However, this requires running an individual attack for each new prompt on the same image. We propose instead to generate prompt-agnostic adversarial attacks by maximizing the $\ell_2$-distance, in the latent space, between the embeddings of the original and perturbed images. Since the encoding process only depends on the image, distorted image representations will cause perturbations in the segmentation masks for a variety of prompts. We show that even imperceptible $\ell_\infty$-bounded perturbations of radius $\epsilon=1/255$ are often sufficient to drastically modify the masks predicted with point, box and text prompts by recently proposed foundation models for segmentation. Moreover, we explore the possibility of creating universal, i.e. non-image-specific, attacks which can be readily applied to any input without further computational cost.
DOI: 10.48550/arxiv.2311.14450
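The attack described in the summary amounts to an optimization against the image encoder alone. The sketch below illustrates one natural way to implement it, as a PGD-style procedure in PyTorch; the name `image_encoder`, the number of steps, and the step size are illustrative assumptions and not the paper's settings, while the $\ell_2$ latent-space objective and the $\ell_\infty$ budget of $\epsilon = 1/255$ are taken from the summary.

```python
import torch


def prompt_agnostic_attack(image_encoder, image, eps=1 / 255, steps=100, step_size=0.25 / 255):
    """PGD-style sketch: maximize the l2 distance between the embeddings of the
    clean and the perturbed image under an l_inf constraint of radius eps.
    The perturbation only targets the image-encoding stage, so it is
    prompt-agnostic: any point/box/text prompt reuses the distorted embedding."""
    image = image.detach()
    with torch.no_grad():
        clean_emb = image_encoder(image)  # embedding of the unperturbed image

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv_emb = image_encoder((image + delta).clamp(0, 1))
        # l2 distance in latent space, summed over the batch
        loss = (adv_emb - clean_emb).flatten(1).norm(p=2, dim=1).sum()
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()   # signed-gradient ascent step
            delta.clamp_(-eps, eps)                  # project onto the l_inf ball
            delta.clamp_(-image, 1 - image)          # keep image + delta in [0, 1]
        delta.grad = None
    return (image + delta).detach().clamp(0, 1)
```

For a promptable segmentation model in the SAM style, the returned adversarial image would simply be passed through the unchanged prompt-conditioned mask decoder: because the encoder output is already distorted, masks for point, box, and text prompts degrade without re-running the attack for each new prompt.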