GHNN: Graph Harmonic Neural Networks for semi-supervised graph-level classification


Bibliographic Details
Published in: Neural Networks 2022-07, Vol. 151, p. 70-79
Main Authors: Ju, Wei; Luo, Xiao; Ma, Zeyu; Yang, Junwei; Deng, Minghua; Zhang, Ming
Format: Article
Language: English
Online Access: Full Text
Description
Abstract: Graph classification aims to predict the property of the whole graph, which has attracted growing attention in the graph learning community. This problem has been extensively studied in the literature of both graph convolutional networks and graph kernels. Graph convolutional networks can learn effective node representations via message passing to mine graph topology in an implicit way, whereas graph kernels can explicitly utilize graph structural knowledge for classification. Due to the scarcity of labeled data in real-world applications, semi-supervised algorithms are anticipated for this problem. In this paper, we propose Graph Harmonic Neural Network (GHNN) which combines the advantages of both worlds to sufficiently leverage the unlabeled data, and thus overcomes label scarcity in semi-supervised scenarios. Specifically, our GHNN consists of a graph convolutional network (GCN) module and a graph kernel network (GKN) module that explore graph topology information from complementary perspectives. To fully leverage the unlabeled data, we develop a novel harmonic contrastive loss and a harmonic consistency loss to harmonize the training of the two modules by giving priority to high-quality unlabeled data, thereby reconciling prediction consistency between both of them. In this manner, the two modules mutually enhance each other to sufficiently explore the graph topology of both labeled and unlabeled data. Extensive experiments on a variety of benchmarks demonstrate the effectiveness of our approach over competitive baselines.
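The abstract describes harmonizing the predictions of the GCN and GKN modules on unlabeled graphs while prioritizing high-quality (i.e., confidently predicted) examples. The paper's exact loss formulation is not given here; the sketch below illustrates the general idea with an assumed design: a symmetric KL-divergence consistency term between the two modules' class distributions, masked to graphs whose averaged prediction exceeds a confidence threshold. All names (`harmonic_consistency_loss`, `threshold`) are illustrative, not the authors' API.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def harmonic_consistency_loss(logits_gcn, logits_gkn, threshold=0.9):
    """Toy consistency loss between two modules' predictions on unlabeled
    graphs. Only graphs whose averaged prediction is confident enough
    contribute, mimicking 'priority to high-quality unlabeled data'.
    The weighting scheme in the actual paper may differ."""
    p = softmax(logits_gcn)
    q = softmax(logits_gkn)
    avg = 0.5 * (p + q)
    confident = avg.max(axis=-1) >= threshold  # mask of reliable graphs
    if not confident.any():
        return 0.0
    # Symmetric KL divergence between the two modules' distributions.
    kl_pq = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    kl_qp = (q * (np.log(q + 1e-12) - np.log(p + 1e-12))).sum(axis=-1)
    return float((kl_pq + kl_qp)[confident].mean())
```

Minimizing such a term pushes the two modules toward agreement on reliable unlabeled graphs, which is one common way to let complementary views (implicit message passing vs. explicit kernel structure) regularize each other in semi-supervised training.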
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2022.03.018