Cross-view graph contrastive learning with hypergraph
Published in: Information Fusion, 2023-11, Vol. 99, p. 101867, Article 101867
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Graph contrastive learning (GCL) provides a new perspective to alleviate the reliance on labeled data for graph representation learning. Recent efforts on GCL leverage various graph augmentation strategies, e.g., node dropping and edge masking, to create augmented views of the original graph, and then contrast the representations in these augmented views to learn expressive graph embeddings. Nevertheless, the contrast is still conducted between small variations of the original graph, from which only limited information can be extracted. In this work, for the first time, we propose to use a hypergraph to establish a new view for graph contrastive learning. Specifically, for each graph, we construct its corresponding hypergraph and then contrast the graph representations learned in the hypergraph view and the original graph view, by which the high-order information of a graph can be captured to produce graph representations of higher quality. Furthermore, to bridge the potential gap between the graph and hypergraph representations, we utilize a diffusion model to exchange the information contained in these two views, enabling better graph contrastive learning. We evaluate our proposal with a collection of experiments, and the empirical results validate that the proposed model improves node and graph classification performance.
Highlights:
• We propose to use hypergraphs as a new paradigm to enhance graph contrastive learning.
• We propose a novel diffusion-model-based fusion mechanism that aligns the positive examples.
• Our experimental results exceed existing benchmarks and achieve state-of-the-art performance.
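The record does not include the paper's implementation details, so the following is only a minimal sketch of the cross-view idea the abstract describes: build a hypergraph view of the input graph, encode both views, and contrast matching node representations across views with an InfoNCE-style objective. The hyperedge construction (one hyperedge per 1-hop neighbourhood), the simple GCN/HGNN-style encoders, the InfoNCE loss, and all names (`GraphEncoder`, `HypergraphEncoder`, `info_nce`) are illustrative assumptions, not the authors' method; the diffusion-based fusion between views is omitted because the record does not specify its form.

```python
# Sketch: contrast a graph view against a hypergraph view of the same nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F


def neighborhood_hypergraph(adj: torch.Tensor) -> torch.Tensor:
    """Incidence matrix H (nodes x hyperedges): hyperedge j = node j plus its 1-hop neighbours."""
    return ((adj + torch.eye(adj.size(0))) > 0).float()


class GraphEncoder(nn.Module):
    """One-layer GCN-style encoder: symmetrically normalised adjacency propagation."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))
        d = a.sum(1)
        a_norm = a / torch.sqrt(d.unsqueeze(1) * d.unsqueeze(0))
        return F.relu(self.lin(a_norm @ x))


class HypergraphEncoder(nn.Module):
    """One-layer HGNN-style encoder: node -> hyperedge -> node propagation."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, inc):
        dv = inc.sum(1, keepdim=True).clamp(min=1)   # node degrees
        de = inc.sum(0, keepdim=True).clamp(min=1)   # hyperedge degrees
        prop = (inc / dv) @ ((inc / de).t() @ x)     # aggregate nodes into hyperedges, then back
        return F.relu(self.lin(prop))


def info_nce(z1, z2, tau=0.5):
    """Cross-view InfoNCE: the same node in the two views forms the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


# Toy usage: a 6-node ring graph with random features.
n, f, d = 6, 8, 16
adj = torch.zeros(n, n)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
x = torch.randn(n, f)

g_enc, h_enc = GraphEncoder(f, d), HypergraphEncoder(f, d)
inc = neighborhood_hypergraph(adj)
loss = info_nce(g_enc(x, adj), h_enc(x, inc))
loss.backward()
print(float(loss))
```

Because every hyperedge groups a full neighbourhood, the hypergraph view exposes higher-order structure than the pairwise edges used by the graph view, which is the contrast the abstract motivates.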
ISSN: 1566-2535, 1872-6305
DOI: 10.1016/j.inffus.2023.101867