Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition
Saved in:
Main authors: | |
---|---|
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Order full text |
Abstract: | Deep learning models have shown their vulnerability when dealing with
adversarial attacks. Existing attacks mostly operate on low-level instances,
such as pixels and super-pixels, and rarely exploit semantic clues. For face
recognition attacks, existing methods typically generate l_p-norm
perturbations on pixels; however, this results in low attack transferability
and high vulnerability to denoising defense models. In this work, instead of
perturbing low-level pixels, we propose to generate attacks by perturbing
high-level semantics to improve attack transferability. Specifically, a
unified flexible framework, Adversarial Attributes (Adv-Attribute), is
designed to generate inconspicuous and transferable attacks on face
recognition: it crafts adversarial noise and adds it to different attributes,
guided by the difference between the face recognition features of the
adversarial face and those of the target. Moreover, an importance-aware
attribute selection and a multi-objective optimization strategy are
introduced to further balance stealthiness and attacking strength. Extensive
experiments on the FFHQ and CelebA-HQ datasets show that the proposed
Adv-Attribute method achieves state-of-the-art attack success rates while
maintaining better visual quality than recent attack methods. |
---|---|
DOI: | 10.48550/arxiv.2210.06871 |
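
The abstract outlines a concrete pipeline: select a few face attributes, optimize a perturbation on those attributes so that a face recognition model matches the target identity, and regularize the perturbation so the semantic edit stays inconspicuous. The minimal PyTorch-style sketch below illustrates that idea under stated assumptions only; the generator `G`, the face recognition encoder `fr_model`, the latent layout `w_src`, the index set `attr_idx`, and the weight `lambda_stealth` are hypothetical stand-ins, not the authors' implementation (the paper's importance-aware attribute selection and multi-objective optimization are reduced here to a fixed index set and a single weighted sum).

```python
import torch
import torch.nn.functional as F

def adv_attribute_sketch(G, fr_model, w_src, target_feat, attr_idx,
                         steps=200, lr=0.01, lambda_stealth=10.0):
    """Hypothetical attribute-space attack sketch (impersonation setting).

    G           -- pretrained attribute-conditioned face generator (assumed)
    fr_model    -- face recognition feature extractor (assumed)
    w_src       -- per-attribute latent codes of the source face, (n_attrs, d)
    target_feat -- face recognition feature of the target identity
    attr_idx    -- attributes chosen for perturbation (a fixed stand-in for
                   the paper's importance-aware attribute selection)
    """
    # The perturbation lives in attribute space; a mask restricts it to the
    # selected attributes so all remaining attributes are left untouched.
    delta = torch.zeros_like(w_src, requires_grad=True)
    mask = torch.zeros(w_src.size(0), 1, device=w_src.device)
    mask[attr_idx] = 1.0
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        w_adv = w_src + mask * delta
        feat_adv = fr_model(G(w_adv))   # FR feature of the edited face

        # Attack objective: pull the feature toward the target identity
        # (flip the sign of this term for a dodging attack instead).
        loss_attack = 1.0 - F.cosine_similarity(
            feat_adv.flatten(), target_feat.flatten(), dim=0)

        # Stealthiness objective: keep the attribute edit small; a crude
        # single-weight stand-in for the paper's multi-objective balance
        # of stealthiness and attacking strength.
        loss_stealth = (mask * delta).pow(2).mean()

        loss = loss_attack + lambda_stealth * loss_stealth
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        return G(w_src + mask * delta)  # final adversarial face image
```

Because the optimization variable is an attribute code rather than raw pixels, the resulting change is a semantic edit (for example, hair color or expression) rather than high-frequency noise, which is the property the abstract credits for the improved transferability and the robustness to denoising defenses.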