DGHSA: derivative graph-based hypergraph structure attack
Saved in:
Published in: Scientific Reports 2024-12, Vol. 14 (1), p. 30222-15, Article 30222
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Hypergraph Neural Networks (HGNNs) have been significantly successful in higher-order tasks. However, recent studies have shown that, like Graph Neural Networks, they are vulnerable to adversarial attacks: attackers fool HGNNs by modifying node links in hypergraphs. Existing adversarial attacks on HGNNs consider only the feasibility of targeted attacks; the more practical untargeted attack has not been addressed. To close this gap, we propose a derivative graph-based hypergraph attack, DGHSA, which focuses on reducing the global performance of HGNNs. Specifically, DGHSA consists of two modules: candidate set generation and candidate set evaluation. The gradients of the incidence matrix are obtained by training HGNNs, and the candidate set is then obtained by modifying the hypergraph structure according to gradient rules. In the candidate set evaluation module, DGHSA uses the derivative graph metric to assess the impact of attacks on the node similarity of candidate hypergraphs, and finally selects the candidate hypergraph with the worst node similarity as the optimal perturbation hypergraph. We have conducted extensive experiments on four commonly used datasets, and the results show that DGHSA can significantly degrade the performance of HGNNs on node classification tasks.
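The two-module pipeline described in the abstract (gradient-guided candidate generation, then similarity-based candidate evaluation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the flip-scoring rule and the cosine node-similarity metric below are simple stand-ins for the paper's gradient rules and derivative-graph metric, and the gradient matrix is assumed to be precomputed by backpropagation through a trained HGNN.

```python
import numpy as np

def generate_candidates(H, grad, n_candidates=3):
    """Module 1 (sketch): gradient-guided candidate generation.

    H    : binary incidence matrix (nodes x hyperedges)
    grad : gradient of the training loss w.r.t. H (same shape),
           assumed precomputed by backprop through an HGNN.
    Flipping H[i, j] from 0 to 1 is promising when the gradient is
    positive (raises the loss); flipping 1 to 0 when it is negative.
    """
    score = np.where(H == 0, grad, -grad)          # expected loss increase per flip
    top = np.argsort(score, axis=None)[::-1][:n_candidates]
    candidates = []
    for idx in top:
        i, j = np.unravel_index(idx, H.shape)
        Hc = H.copy()
        Hc[i, j] = 1.0 - Hc[i, j]                  # apply a single structure flip
        candidates.append(Hc)
    return candidates

def node_similarity(H):
    """Module 2 (stand-in metric): mean pairwise cosine similarity
    of node incidence rows; lower means nodes look less alike."""
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    Hn = H / norms
    S = Hn @ Hn.T
    n = H.shape[0]
    return (S.sum() - np.trace(S)) / (n * (n - 1))

# Toy example: 3 nodes, 2 hyperedges, with an assumed gradient matrix.
H = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
grad = np.array([[0.2, 0.9], [-0.1, 0.3], [0.5, -0.4]])
cands = generate_candidates(H, grad)
# Select the perturbed hypergraph with the worst (lowest) node similarity.
best = min(cands, key=node_similarity)
```

In DGHSA proper, the evaluation step uses the derivative graph metric rather than plain cosine similarity, and the process is repeated within an attack budget; the greedy "score every flip, keep the top candidates, pick the worst-similarity one" structure is the part this sketch preserves.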
ISSN: 2045-2322
DOI: 10.1038/s41598-024-79824-y