HGAttack: Transferable Heterogeneous Graph Adversarial Attack
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce, where resilience against adversarial attacks is crucial. However, existing adversarial attack methods, designed primarily for homogeneous graphs, fall short when applied to HGNNs because they cannot address the structural and semantic complexity of heterogeneous graphs. This paper introduces HGAttack, the first dedicated gray-box evasion attack method for heterogeneous graphs. We design a novel surrogate model that closely mimics the behavior of the target HGNN and use gradient-based methods for perturbation generation. Specifically, the proposed surrogate model leverages heterogeneous information by extracting meta-path induced subgraphs and applying GNNs to learn node embeddings with distinct semantics from each subgraph. This approach improves the transferability of the generated attacks to the target HGNN and significantly reduces memory costs. For perturbation generation, we introduce a semantics-aware mechanism that uses subgraph gradient information to autonomously identify vulnerable edges across a wide range of relations within a constrained perturbation budget. We validate HGAttack's efficacy with comprehensive experiments on three datasets and provide empirical analyses of its generated perturbations. HGAttack outperforms all baseline methods, significantly degrading the performance of target HGNN models and confirming the effectiveness of our approach for evaluating the robustness of HGNNs against adversarial attacks.
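The surrogate design described in the abstract can be made concrete. Below is a minimal PyTorch sketch (not the authors' released code) of the core idea: chain relation-level adjacency matrices into meta-path induced adjacencies, run a lightweight GCN-style layer on each induced subgraph to obtain one semantic embedding per meta-path, and aggregate the embeddings for classification. All names here (`metapath_adjacency`, `SurrogateHGNN`) and the mean aggregation are illustrative assumptions.

```python
import torch
import torch.nn as nn

def metapath_adjacency(relations):
    """Chain relation adjacencies, e.g. A_PA @ A_AP for meta-path P-A-P."""
    adj = relations[0]
    for rel in relations[1:]:
        adj = adj @ rel
    return (adj > 0).float()  # binarize the induced links

def normalize(adj):
    """Symmetric GCN normalization D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class SurrogateHGNN(nn.Module):
    """One GCN-style layer per meta-path induced subgraph, mean-aggregated."""
    def __init__(self, in_dim, hid_dim, n_classes, n_metapaths):
        super().__init__()
        self.lins = nn.ModuleList(
            [nn.Linear(in_dim, hid_dim) for _ in range(n_metapaths)])
        self.clf = nn.Linear(hid_dim, n_classes)

    def forward(self, x, mp_adjs):
        # one semantic embedding per meta-path induced subgraph
        hs = [torch.relu(adj @ lin(x)) for lin, adj in zip(self.lins, mp_adjs)]
        return self.clf(torch.stack(hs).mean(dim=0))

# usage sketch: papers linked via shared authors (meta-path P-A-P)
# a_pa = torch.randint(0, 2, (n_papers, n_authors)).float()
# mp = normalize(metapath_adjacency([a_pa, a_pa.t()]))
# model = SurrogateHGNN(in_dim=64, hid_dim=32, n_classes=3, n_metapaths=1)
# logits = model(features, [mp])
```

Operating on one meta-path induced subgraph at a time, rather than on the full heterogeneous graph with relation-specific attention, is plausibly where the memory savings claimed in the abstract come from; consult the paper for the exact architecture.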
DOI: 10.48550/arxiv.2401.09945
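The semantics-aware perturbation step can likewise be sketched. Assuming a trained surrogate and the `normalize` helper from the sketch above, the snippet below scores every candidate edge flip by the gradient of the target loss with respect to the relation-level adjacencies and greedily flips the top-scoring edges across all relations within the budget. This is a common gradient-based attack pattern standing in for the paper's mechanism, not necessarily its exact procedure.

```python
import torch
import torch.nn.functional as F

def gradient_attack(surrogate, x, rel_adjs, labels, target_idx, budget):
    """Flip the `budget` edges whose flips most increase the target loss."""
    adjs = [a.clone().requires_grad_(True) for a in rel_adjs]
    # each relation induces one meta-path subgraph (e.g. A_PA @ A_PA^T for
    # P-A-P); binarization is skipped here so gradients reach the edges
    mp_adjs = [normalize(a @ a.t()) for a in adjs]
    loss = F.cross_entropy(surrogate(x, mp_adjs)[target_idx],
                           labels[target_idx])
    grads = torch.autograd.grad(loss, adjs)

    # adding an absent edge (a=0) helps when its gradient is positive;
    # removing a present edge (a=1) helps when its gradient is negative
    scores = [g * (1 - 2 * a) for g, a in zip(grads, rel_adjs)]
    flat = torch.cat([s.flatten() for s in scores])
    top = set(flat.topk(budget).indices.tolist())

    # apply the globally top-scoring flips, relation by relation
    perturbed, offset = [], 0
    for a, s in zip(rel_adjs, scores):
        p = a.clone()
        for i in (i - offset for i in top if offset <= i < offset + s.numel()):
            r, c = divmod(i, a.size(1))
            p[r, c] = 1 - p[r, c]  # flip this edge
        perturbed.append(p)
        offset += s.numel()
    return perturbed
```

Because candidate flips from all relations compete in a single top-k, the budget is allocated to whichever relations carry the most vulnerable edges, which matches the abstract's claim that vulnerable edges are identified "across a wide range of relations" rather than per relation.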