AdvAttackVis: An Adversarial Attack Visualization System for Deep Neural Networks
Published in: International Journal of Advanced Computer Science & Applications, 2024-05, Vol. 15 (5)
Format: Article
Language: English
Online access: Full text
Abstract: Deep learning has been widely used in scenarios such as image classification, natural language processing, and speech recognition. However, deep neural networks are vulnerable to adversarial attacks, which cause incorrect predictions. An adversarial attack involves generating adversarial examples and using them to attack a target model. Both the generation mechanism of adversarial examples and the way the target model predicts on them are complicated, which makes adversarial attacks difficult for deep learning users to understand. In this paper, we present an adversarial attack visualization system called AdvAttackVis to assist users in learning, understanding, and exploring adversarial attacks. Through its interactive visualization interface, the system enables users to train and analyze adversarial attack models, understand the principles of adversarial attacks, analyze the results of attacks on the target model, and explore the target model's prediction mechanism for adversarial examples. Through real case studies on adversarial attacks, we demonstrate the usability and effectiveness of the proposed visualization system.
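The pipeline described in the abstract (generate an adversarial example, then feed it to the target model) can be illustrated with the Fast Gradient Sign Method, one widely used generation technique. The paper does not specify its attack methods; the toy linear classifier, weights, and epsilon below are purely illustrative assumptions, not details from the paper.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """One FGSM step: nudge every input feature by epsilon in the
    direction that increases the attacker's loss (sign of gradient)."""
    return x + epsilon * np.sign(grad)

# Hypothetical toy linear classifier: score = w . x, score > 0 -> class 1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, 0.4])   # clean input; w . x = 0.2, so class 1

# For a linear model, pushing the score toward the other class means
# following the direction of -w; FGSM only uses its sign.
x_adv = fgsm_perturb(x, -w, epsilon=0.3)

print(np.dot(w, x))      # clean score (positive -> class 1)
print(np.dot(w, x_adv))  # adversarial score (negative -> class 0)
```

Despite the small per-feature budget (epsilon = 0.3), the perturbation flips the model's decision, which is the behavior such a visualization system helps users inspect.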
ISSN: 2158-107X, 2156-5570
DOI: 10.14569/IJACSA.2024.0150538