A Novel Composite Graph Neural Network

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-10, Vol. 35 (10), p. 13411-13425
Authors: Liu, Zhaogeng; Yang, Jielong; Zhong, Xionghu; Wang, Wenwu; Chen, Hechang; Chang, Yi
Format: Article
Language: English
Abstract: Graph neural networks (GNNs) have achieved great success in many fields due to their powerful capability to process graph-structured data. However, most GNNs can be applied only where a graph is already known, whereas real-world data are often noisy or lack an available graph structure altogether. Recently, graph learning has attracted increasing attention as a way to deal with these problems. In this article, we develop a novel approach to improving the robustness of GNNs, called the composite GNN. Unlike existing methods, our method uses composite graphs (C-graphs) to characterize both sample and feature relations. The C-graph is a single graph that unifies these two kinds of relations: edges between samples represent sample similarities, and each sample has a tree-based feature graph that models feature importance and combination preference. By jointly learning multiaspect C-graphs and neural network parameters, our method improves the performance of semisupervised node classification and ensures robustness. We conduct a series of experiments to evaluate our method and two variants that learn only sample relations or only feature relations. Extensive results on nine benchmark datasets demonstrate that the proposed method achieves the best performance on almost all the datasets and is robust to feature noise.
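To make the joint-learning idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of a GNN that learns the sample-relation graph together with the classifier, and approximates the per-sample tree-based feature graph with a learned soft feature gate. All class and function names here are hypothetical illustrations, and the graph learner is a simple differentiable similarity module standing in for the paper's multiaspect C-graph construction.

    # Hypothetical sketch of the composite-GNN idea: learn sample relations
    # and feature relations jointly with the network parameters.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CompositeGNNSketch(nn.Module):
        def __init__(self, in_dim, hid_dim, n_classes):
            super().__init__()
            # Soft feature gate: a stand-in for the paper's tree-based
            # per-sample feature graph (assumption, not the actual method).
            self.feat_gate = nn.Linear(in_dim, in_dim)
            # Embedding used to learn the sample-similarity graph.
            self.embed = nn.Linear(in_dim, hid_dim)
            self.gc1 = nn.Linear(in_dim, hid_dim)
            self.gc2 = nn.Linear(hid_dim, n_classes)

        def sample_graph(self, x):
            # Learn sample relations as a row-normalized dense adjacency
            # from pairwise similarities of embedded samples.
            z = torch.tanh(self.embed(x))
            return torch.softmax(z @ z.t(), dim=-1)

        def forward(self, x):
            # Feature relations: reweight each sample's features.
            x = x * torch.sigmoid(self.feat_gate(x))
            a = self.sample_graph(x)
            h = F.relu(a @ self.gc1(x))   # one propagation step
            return self.gc2(a @ h)        # second step + classification

    # Usage on random data: 100 nodes, 16 features, 3 classes.
    x = torch.randn(100, 16)
    y = torch.randint(0, 3, (100,))
    model = CompositeGNNSketch(16, 32, 3)
    loss = F.cross_entropy(model(x), y)
    loss.backward()  # gradients flow into both graph learner and classifier

Because the learned adjacency is produced by differentiable modules, a single backward pass updates the graph-learning parameters and the classification weights together, which is the "jointly learning C-graphs and neural network parameters" ingredient the abstract describes.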
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2023.3268766