Enhancing adversarial attacks with resize-invariant and logical ensemble

Bibliographic Details
Published in: Neural Networks, 2024-05, Vol. 173, Article 106194
Authors: Shao, Yanling; Zhang, Yuzhi; Dong, Wenyong; Zhang, Qikun; Shan, Pingping; Guo, Junying; Xu, Hairui
Format: Article
Language: English
Online access: Full text
Description
Abstract: In black-box scenarios, most transfer-based attacks improve the transferability of adversarial examples by optimizing the gradient calculation on the input image. Unfortunately, because the gradient information is calculated and optimized for each pixel of the image individually, the generated adversarial examples tend to overfit the local model and transfer poorly to the target model. To tackle this issue, we propose a resize-invariant method (RIM) and a logical ensemble transformation method (LETM) to enhance the transferability of adversarial examples. Specifically, RIM is inspired by the resize-invariant property of Deep Neural Networks (DNNs). The range of resizable pixels is first divided into multiple intervals, and the input image is then randomly resized and padded within each interval. Finally, LETM performs a logical ensemble of multiple RIM-transformed images to calculate the final gradient update direction. The proposed method adequately considers the information of each pixel in the image as well as its surrounding pixels. The probability of duplicated image transformations is minimized, and the overfitting of adversarial examples is effectively mitigated. Extensive experiments on the ImageNet dataset show that our approach outperforms other advanced methods and generates more transferable adversarial examples.
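
The abstract describes the two transformations only at a high level, but the pipeline is concrete enough to sketch. The following Python/PyTorch sketch is a hypothetical illustration reconstructed from the abstract alone, not the authors' implementation: the interval boundaries, the downscale-then-pad convention, the nearest-neighbor resizing, and the use of plain gradient averaging in place of the paper's "logical ensemble" rule are all assumptions.

    import torch
    import torch.nn.functional as F

    def rim_transform(x, low, high):
        # Resize-invariant transform (assumed semantics): draw a target size
        # uniformly from the interval [low, high), downscale the image to it,
        # then zero-pad back to the original resolution at a random offset.
        orig = x.shape[-1]  # assumes a square (N, C, H, W) input batch
        size = int(torch.randint(low, high, (1,)).item())
        resized = F.interpolate(x, size=(size, size), mode="nearest")
        pad = orig - size
        left = int(torch.randint(0, pad + 1, (1,)).item())
        top = int(torch.randint(0, pad + 1, (1,)).item())
        return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)

    def letm_gradient(model, x, label, intervals):
        # Ensemble gradient (assumed semantics): transform the input once per
        # interval, average the losses, and differentiate through the (fully
        # differentiable) resize-and-pad back to the clean input. Plain
        # averaging stands in for the paper's logical ensemble rule.
        x = x.clone().detach().requires_grad_(True)
        loss = sum(F.cross_entropy(model(rim_transform(x, lo, hi)), label)
                   for lo, hi in intervals) / len(intervals)
        return torch.autograd.grad(loss, x)[0]

In an iterative transfer attack such as MI-FGSM, the returned gradient would replace the single-image gradient: for a 224-pixel input one might pass intervals such as [(160, 176), (176, 192), (192, 208), (208, 224)] and feed the sign of the averaged gradient into the momentum update. Drawing one sample per interval, rather than one size from the whole range as in earlier diverse-input attacks, is what keeps successive transformations from duplicating each other.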
ISSN: 0893-6080
eISSN: 1879-2782
DOI: 10.1016/j.neunet.2024.106194