CFOA: Exploring transferable adversarial examples by content feature optimization with attention guidance
Published in: Computers & Security, 2024-07, Vol. 142, p. 103882, Article 103882
Main Authors: , , , ,
Format: Article
Language: English
Subjects:
Online Access: Full text
Abstract: Deep neural networks are known to be highly susceptible to adversarial examples, created by adding small, carefully designed perturbations to original images. In black-box scenarios, attacking neural networks is challenging because information about the target model is unavailable. Generating transferable adversarial examples on surrogate models is a viable approach. However, current mainstream transfer-based attacks seldom consider the relationship between transferability and the content features of an image. To effectively enhance the attack success rate at the feature level, we propose a new attack method called Content Feature Optimization Attack (CFOA). CFOA operates on white-box models and generates transferable adversarial examples by targeting the content feature representations of images in the feature space. The method extracts and parameterizes content features, then iteratively generates feature-level perturbations guided by an attention loss and an adversarial loss. The resulting adversarial examples exhibit strong transferability to other black-box models. Experimental results show that our method produces adversarial examples with higher transfer success rates than state-of-the-art methods; importantly, they can also evade certain defense mechanisms, posing a challenge for adversarial defense.
ISSN: 0167-4048, 1872-6208
DOI: 10.1016/j.cose.2024.103882
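
The abstract describes CFOA's pipeline only at a high level: perturb intermediate content-feature representations on a white-box surrogate, guided jointly by an attention loss and an adversarial loss. The exact losses and attention mechanism are defined in the paper; the following is only a minimal PyTorch sketch of that general scheme. The names `surrogate` and `feature_layer`, the activation-magnitude attention proxy, and all hyperparameter values are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def feature_level_transfer_attack(surrogate, feature_layer, x, y,
                                  eps=16 / 255, alpha=2 / 255,
                                  steps=10, lam=1.0):
    """Hypothetical sketch of a CFOA-style feature-level attack:
    push attended content features away from their clean values while
    an adversarial loss degrades the surrogate's classification."""
    feats = {}

    def hook(_module, _inp, out):
        feats["mid"] = out  # capture the intermediate content features

    handle = feature_layer.register_forward_hook(hook)

    with torch.no_grad():
        surrogate(x)
        clean_feat = feats["mid"].detach()
        # Crude attention proxy: normalized channel-wise activation
        # magnitude (the paper's actual attention guidance differs).
        attn = clean_feat.abs().mean(dim=1, keepdim=True)
        attn = attn / (attn.amax(dim=(2, 3), keepdim=True) + 1e-12)

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = surrogate(x_adv)
        # Attention loss: disrupt attended content features.
        attn_loss = (attn * (feats["mid"] - clean_feat) ** 2).mean()
        # Adversarial loss: standard untargeted cross-entropy.
        adv_loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(attn_loss + lam * adv_loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # L-inf projection
            x_adv = x_adv.clamp(0, 1)                     # valid image range
    handle.remove()
    return x_adv.detach()
```

As a usage sketch, one might pass a torchvision ResNet-50 as `surrogate` with its `layer3` module as `feature_layer`, then evaluate the returned `x_adv` on held-out black-box models to measure transferability; those choices are illustrative, not taken from the paper.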