Backdoor Attacks on Graph Neural Networks Trained with Data Augmentation

Bibliographic Details
Published in: IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2024/03/01, Vol. E107.A(3), pp. 355-358
Authors: YASHIKI, Shingo; TAKAHASHI, Chako; SUZUKI, Koutarou
Format: Article
Language: English
Online access: Full text
Description
Abstract: This paper investigates the effects of backdoor attacks on graph neural networks (GNNs) trained with simple data augmentation that modifies graph edges in graph classification. The numerical results show that GNNs trained with data augmentation remain vulnerable to backdoor attacks and may even be more vulnerable to such attacks than GNNs trained without data augmentation.
ISSN: 0916-8508, 1745-1337
DOI: 10.1587/transfun.2023CIL0007
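
The abstract above only summarizes the setting. As a minimal illustrative sketch, not the paper's actual procedure, the Python code below shows one common form of edge-modification data augmentation (random edge dropping and adding) and a subgraph-trigger backdoor that plants a small clique and relabels the poisoned graph to an attacker-chosen target class. The function names, probabilities, and clique trigger are assumptions made for illustration only.

# Hypothetical sketch: edge-modification augmentation and a subgraph-trigger
# backdoor for graph classification. Not the exact method of the paper.
import random
import networkx as nx

def augment_edges(g: nx.Graph, drop_prob: float = 0.1, add_prob: float = 0.05) -> nx.Graph:
    """Simple augmentation: randomly drop existing edges and add new ones."""
    aug = g.copy()
    for u, v in list(aug.edges()):
        if random.random() < drop_prob:
            aug.remove_edge(u, v)
    nodes = list(aug.nodes())
    for u in nodes:
        for v in nodes:
            if u < v and not aug.has_edge(u, v) and random.random() < add_prob:
                aug.add_edge(u, v)
    return aug

def inject_backdoor(g: nx.Graph, trigger_size: int = 4, target_label: int = 1):
    """Plant a small clique trigger on random nodes and relabel to the target class."""
    poisoned = g.copy()
    victims = random.sample(list(poisoned.nodes()),
                            min(trigger_size, poisoned.number_of_nodes()))
    for i, u in enumerate(victims):
        for v in victims[i + 1:]:
            poisoned.add_edge(u, v)
    return poisoned, target_label

if __name__ == "__main__":
    clean = nx.erdos_renyi_graph(20, 0.15, seed=0)
    augmented = augment_edges(clean)        # training-time augmentation
    backdoored, label = inject_backdoor(clean)  # attacker's poisoned sample
    print(clean.number_of_edges(), augmented.number_of_edges(),
          backdoored.number_of_edges(), label)

In this reading of the abstract, the augmentation and the backdoor both act on graph edges, which is why augmentation alone does not necessarily remove, and may even mask, the planted trigger during training.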