Dual-channel graph contrastive learning for self-supervised graph-level representation learning

Bibliographic Details
Published in: Pattern Recognition 2023-07, Vol. 139, p. 109448, Article 109448
Main Authors: Luo, Zhenfei, Dong, Yixiang, Zheng, Qinghua, Liu, Huan, Luo, Minnan
Format: Article
Language: English
Online Access: Full text
Description
Abstract:
Highlights:
• DualGCL achieves fine-grained node information aggregation and augmentation-free contrastive learning.
• An adaptive hierarchical aggregation process is proposed for node soft-clustering and weight-based local information incorporation.
• A dual-channel contrastive learning process is designed to efficiently learn graph-level discriminative representations.

Self-supervised graph-level representation learning aims to learn discriminative representations for subgraphs or entire graphs without human-curated labels. Recently, graph contrastive learning (GCL) methods have revolutionized this field and achieved state-of-the-art results in various downstream tasks. Nonetheless, current GCL models are mostly based on simple node-level information aggregation operations and fail to reveal the various substructures of input graphs. Moreover, to perform graph-graph contrastive training, they often rely on carefully designed graph augmentations, which are expensive and require extensive expert effort. Here, we propose a novel GCL framework, DualGCL, for self-supervised graph-level representation learning. For fine-grained incorporation of local information, we first present an adaptive hierarchical aggregation process with a differentiable Transformer-based aggregator. Then, to efficiently learn graph-level discriminative representations, we introduce a dual-channel contrastive learning process that operates in a multi-granularity, augmentation-free contrasting mode. Evaluated empirically on six popular graph classification benchmarks, DualGCL achieves performance better than or comparable to that of various strong baselines.
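The abstract names two components without giving their formulation. The sketch below is a rough PyTorch illustration only, not the paper's actual method: a Transformer-style soft-cluster aggregator that could realize node soft-clustering, and an augmentation-free cross-channel InfoNCE loss that pairs a fine-grained pooled view of each graph with its cluster-level view. All names and design choices (SoftClusterAggregator, dual_channel_nce, n_clusters, tau, mean pooling for both channels) are assumptions.

```python
# Hypothetical sketch of the two ideas named in the abstract; the paper's
# exact formulation is not given in this record.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftClusterAggregator(nn.Module):
    """Soft-clusters nodes into learned cluster embeddings via attention,
    giving a coarser, cluster-level view of the graph (one hierarchy step)."""
    def __init__(self, dim, n_clusters=8, n_heads=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_clusters, dim))  # learned cluster seeds
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, x):                        # x: (batch, n_nodes, dim)
        q = self.queries.expand(x.size(0), -1, -1)
        clusters, weights = self.attn(q, x, x)   # weights: soft node-to-cluster assignment
        return clusters                          # (batch, n_clusters, dim)

def dual_channel_nce(z1, z2, tau=0.2):
    """Augmentation-free InfoNCE: graph i's embedding in channel 1 should match
    the same graph's embedding in channel 2 (positives are cross-channel pairs,
    all other graphs in the batch serve as negatives)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                   # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))           # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# Toy usage: node-level mean pooling vs. cluster-level pooling as the two channels.
x = torch.randn(32, 50, 64)                      # 32 graphs, 50 nodes, 64-dim features
agg = SoftClusterAggregator(64)
z_node = x.mean(dim=1)                           # fine-grained channel
z_cluster = agg(x).mean(dim=1)                   # coarse, cluster-level channel
loss = dual_channel_nce(z_node, z_cluster)
loss.backward()
```

Because the two channels come from different granularities of the same graph, no graph augmentation is needed to form positive pairs, which matches the augmentation-free claim in the highlights.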
ISSN: 0031-3203
1873-5142
DOI: 10.1016/j.patcog.2023.109448