Unbiased and augmentation-free self-supervised graph representation learning
Published in: Pattern Recognition, 2024-05, Vol. 149, p. 110274, Article 110274
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Graph Contrastive Learning (GCL) is a promising self-supervised method for learning node representations that combines graph convolutional networks (GCNs) and contrastive learning. However, existing GCL methods rely heavily on the given graph structure and on augmentation schemes to learn representations that are invariant across augmentation views. This is problematic because the performance of GCNs can deteriorate when the original graph structure contains noisy connections, and little is known about how to augment graphs substantially without altering their labels. To address these issues, we propose Unbiased and Augmentation-Free Self-Supervised Graph Contrastive Learning (USAF-GCL). We design graph learners and post-processing schemes to refine the structure of the original graph. Instead of relying on augmentation, we generate contrastive views from global and local semantics. To keep embedding similarity consistent with the original feature similarity, we introduce pseudo-homophily as a supervisory signal, and we theoretically show that maximizing pseudo-homophily raises the upper bound of the mutual information between predicted and true labels. USAF-GCL offers several advantages over existing GCL methods. First, it uses an unbiased graph structure to reduce the impact of noise on model performance. Second, it saves computational resources by eliminating complex data augmentation. Finally, it integrates structural information, neighborhood information, and embedding-feature consistency into graph representation learning, effectively improving model performance. Extensive experiments on eight benchmark datasets confirm the effectiveness and efficiency of USAF-GCL.
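The abstract describes the method only at a high level. As a rough illustration of what an augmentation-free contrastive objective built from global and local semantics could look like, the following PyTorch sketch contrasts each node embedding with a neighborhood-averaged "local" view and a mean-pooled "global" summary. The view constructions, the InfoNCE-style loss, the function name `augmentation_free_contrastive_loss`, and the temperature `tau` are all illustrative assumptions, not the authors' actual formulation.

```python
import torch
import torch.nn.functional as F


def augmentation_free_contrastive_loss(h, adj, tau=0.5):
    """h: node embeddings of shape (N, d); adj: dense adjacency matrix (N, N)."""
    n = h.size(0)

    # "Local" semantic view: mean of each node's neighbours (assumed construction).
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    local_view = (adj @ h) / deg                        # (N, d)

    # "Global" semantic view: a single graph-level summary vector (assumed).
    global_view = h.mean(dim=0, keepdim=True)           # (1, d)

    z = F.normalize(h, dim=1)
    z_local = F.normalize(local_view, dim=1)
    z_global = F.normalize(global_view, dim=1)

    # InfoNCE-style objective: each node's embedding should match its own local
    # view; other nodes' local views and the global summary act as negatives.
    candidates = torch.cat([z_local, z_global], dim=0)  # (N + 1, d)
    logits = z @ candidates.t() / tau                   # (N, N + 1)
    targets = torch.arange(n, device=h.device)
    return F.cross_entropy(logits, targets)


# Toy usage on random data.
h = torch.randn(6, 16)
adj = (torch.rand(6, 6) > 0.5).float()
loss = augmentation_free_contrastive_loss(h, adj)
```

Because no stochastic augmentation is applied, both views are deterministic functions of the input graph and embeddings, which is what lets this kind of objective avoid the augmentation cost the abstract mentions.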
Highlights:
• USAF-GCL introduces pseudo-homophily as a supervisory signal.
• We demonstrate that maximizing pseudo-homophily raises the upper bound of the mutual information between predicted and true labels.
• The proposed method outperforms state-of-the-art methods while requiring less training time.
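The first highlight refers to pseudo-homophily as a supervisory signal. The record does not give its exact definition; one common reading, sketched below purely as an assumption, measures the fraction of edges whose endpoints receive the same pseudo-label, with pseudo-labels obtained here from k-means clustering of the node embeddings. The clustering choice, the function name `pseudo_homophily`, and the edge-ratio formula are all hypothetical.

```python
import torch
from sklearn.cluster import KMeans


def pseudo_homophily(h, edge_index, n_clusters=7):
    """h: node embeddings (N, d); edge_index: (2, E) edge list.

    Returns the fraction of edges whose endpoints fall in the same cluster of
    the embedding space; pseudo-labels come from k-means (an assumption).
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        h.detach().cpu().numpy()
    )
    labels = torch.as_tensor(labels)
    src, dst = edge_index
    return (labels[src] == labels[dst]).float().mean().item()


# Toy usage: a 4-node path graph with random embeddings.
h = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
score = pseudo_homophily(h, edge_index, n_clusters=2)  # value in [0, 1]
```

A score near 1 would mean that connected nodes tend to share pseudo-labels; a training signal that pushes this quantity up is one plausible way to couple embedding similarity with the original graph structure, in the spirit of the abstract's description.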
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2024.110274