Node-Level Graph Regression With Deep Gaussian Process Models
Published in: IEEE Transactions on Artificial Intelligence, June 2024, Vol. 5 (6), pp. 3257-3269
Format: Article
Language: English
Abstract: In this article, we study node-level graph regression, which aims to predict an output vector for each node of a given graph. This task has a broad range of applications, including spatiotemporal forecasting and computational biology. We propose a model called deep Gaussian processes over graphs (DGPG), which is composed of hierarchical Gaussian processes (GPs) and learns the mapping between input and output signals in graph domains. DGPG has several distinctive advantages: it captures prediction uncertainty, is effective on small datasets, and requires less effort in selecting model architectures and hyperparameters. Thanks to its hierarchical deep structure and variational inference framework, it is also more expressive and scalable than traditional GP models. Moreover, we generalize DGPG to the more challenging setting in which the graph structure is time-varying. Our theoretical analysis shows that graph information can improve convergence by reducing sampling variance when optimizing the evidence lower bound, and that the challenge of a time-varying graph structure can be addressed by a time-weighted sampling scheme. The performance of DGPG is demonstrated through extensive experiments on various synthetic and real-world datasets. We further discuss some appealing characteristics of DGPG, such as its ability to capture prediction uncertainty and to learn graph structures.
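The abstract describes GP-based regression over graph nodes. As a rough illustration of that general idea (not the paper's DGPG model, which is a deep hierarchical GP trained with variational inference), the sketch below runs plain GP regression with a graph diffusion kernel built from the graph Laplacian; all function names and parameters here are illustrative assumptions.

```python
import numpy as np

def diffusion_kernel(adj, beta=0.5):
    """Graph diffusion kernel K = expm(-beta * L), computed via
    eigendecomposition of the combinatorial Laplacian L = D - A."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    w, v = np.linalg.eigh(lap)          # L is symmetric, so eigh is safe
    return (v * np.exp(-beta * w)) @ v.T

def gp_node_regression(adj, y_train, train_idx, noise=1e-2):
    """Posterior mean and variance at every node, given noisy
    observations of the signal on the nodes in train_idx."""
    K = diffusion_kernel(adj)
    K_tt = K[np.ix_(train_idx, train_idx)] + noise * np.eye(len(train_idx))
    K_at = K[:, train_idx]
    mean = K_at @ np.linalg.solve(K_tt, y_train)
    var = np.diag(K) - np.einsum("ij,ji->i", K_at,
                                 np.linalg.solve(K_tt, K_at.T))
    return mean, var

# Tiny example: a 4-node path graph, observing nodes 0 and 3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
mean, var = gp_node_regression(adj, np.array([1.0, -1.0]), [0, 3])
```

Note how the posterior variance is available at every node, mirroring the uncertainty-capturing property the abstract highlights; unobserved nodes get larger variance than observed ones.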
ISSN: 2691-4581
DOI: 10.1109/TAI.2023.3347177