Similarity Preserving Adversarial Graph Contrastive Learning
Format: Article
Language: English
Abstract: Recent works demonstrate that GNN models are vulnerable to adversarial attacks, which refer to imperceptible perturbations on the graph structure and node features. Among various GNN models, graph contrastive learning (GCL) based methods particularly suffer from adversarial attacks due to their inherent design, which highly depends on self-supervision signals derived from the original graph; these signals, however, already contain noise once the graph is attacked. To achieve adversarial robustness against such attacks, existing methods apply adversarial training (AT) to the GCL framework, treating the attacked graph as an augmentation. However, we find that existing adversarially trained GCL methods achieve robustness at the expense of not being able to preserve node feature similarity. In this paper, we propose a similarity-preserving adversarial graph contrastive learning (SP-AGCL) framework that contrasts the clean graph with two auxiliary views of different properties (i.e., the node similarity-preserving view and the adversarial view). Extensive experiments demonstrate that SP-AGCL achieves competitive performance on several downstream tasks and show its effectiveness in various scenarios, e.g., a network with adversarial attacks, noisy labels, and heterophilous neighbors. Our code is available at https://github.com/yeonjun-in/torch-SP-AGCL.
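To make the abstract's core idea concrete, below is a minimal PyTorch sketch of a multi-view contrastive objective of the kind described: a shared encoder's clean-view embeddings are contrasted against both an adversarial view and a node-similarity-preserving view. All names here (info_nce, tau, the three embedding tensors) are illustrative assumptions, not the authors' implementation; the linked repository contains the actual code.

    # Hypothetical sketch of a multi-view contrastive objective in the
    # spirit of SP-AGCL; not the authors' implementation.
    import torch
    import torch.nn.functional as F

    def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
        """Node-level InfoNCE: node i in view 1 should match node i in view 2."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / tau                      # pairwise cosine similarities
        labels = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, labels)

    # Stand-in embeddings from a shared GNN encoder applied to three views:
    # the clean graph, an adversarially perturbed graph, and a
    # similarity-preserving view (e.g., a feature-kNN graph).
    n, d = 128, 64
    z_clean = torch.randn(n, d, requires_grad=True)
    z_adv = torch.randn(n, d)   # encoder output on the adversarial view
    z_sim = torch.randn(n, d)   # encoder output on the similarity-preserving view

    # Contrast the clean view with both auxiliary views, pushing the encoder
    # toward representations that are robust and similarity-preserving.
    loss = info_nce(z_clean, z_adv) + info_nce(z_clean, z_sim)
    loss.backward()  # a real training loop would update the shared encoder here

Contrasting against two auxiliary views at once is what distinguishes this setup from plain adversarially trained GCL, which uses only the attacked graph as an augmentation.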
DOI: 10.48550/arxiv.2306.13854