PP-GNN: Pretraining Position-aware Graph Neural Networks with the NP-hard metric dimension problem
Published in: Neurocomputing (Amsterdam), 2023-12, Vol. 561, Article 126848
Author:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: On a graph G=(V,E), a set S⊂V is called resolving if for all u,v∈V with u≠v there exists w∈S such that d(u,w)≠d(v,w). The smallest possible cardinality of such an S is called the metric dimension, and computing it is known to be NP-hard. Solving the metric dimension problem (MDP) and finding the associated minimal resolving set have many important applications across science and engineering. In this paper, we introduce MaskGNN, a method that uses a graph neural network (GNN) model to learn the minimal resolving set in a self-supervised manner by optimizing a novel surrogate objective. We provide a construction showing that the global minimum of this objective coincides with the solution to the MDP. MaskGNN attains a 51%–72% improvement over the best baseline and up to 98% of the reward of integer programming in 0.72% of the running time. On this foundation, we introduce Pretraining Position-aware GNNs (PP-GNN) and evaluate it on popular position-based benchmark tasks on graphs. PP-GNN's strong results challenge the currently popular paradigm of heuristic-driven anchor selection with a new learning-based paradigm: simultaneously learning the metric basis of the graph and pretraining position-aware representations for transfer to downstream position-based tasks.
Highlights:
• Formulation of a self-supervised surrogate objective for learning an NP-hard problem
• Construction showing the surrogate loss's minimum coincides with the optimal solution
• Extensive evaluation and significant improvements over meaningful baselines
• Competitive performance with integer programming, which is over a hundred times slower
• Single-instance pretraining learns transferable position-aware representations
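To make the resolving-set definition in the abstract concrete, here is a minimal illustrative sketch, not taken from the paper, that checks the definition by brute force and computes the metric dimension of a tiny graph. It assumes an unweighted, connected graph given as an adjacency-list dict; the helper names (bfs_distances, is_resolving, metric_dimension) are ours for illustration only.

```python
# Illustrative sketch (not the paper's method): brute-force check of the
# resolving-set definition and the metric dimension of a small graph.
# Assumes an unweighted, connected graph as an adjacency-list dict.
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Hop distances d(source, v) to every vertex v via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_resolving(adj, S):
    """S resolves the graph iff the distance vectors (d(v, w) for w in S)
    are pairwise distinct over all vertices v."""
    dist = {w: bfs_distances(adj, w) for w in S}
    signatures = {tuple(dist[w][v] for w in S) for v in adj}
    return len(signatures) == len(adj)

def metric_dimension(adj):
    """Smallest |S| such that S is resolving. Exponential-time search,
    usable only on tiny instances: the problem is NP-hard in general,
    which is what motivates learned approaches like MaskGNN."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if is_resolving(adj, S):
                return k, S
    return len(vertices), tuple(vertices)

# A path on 4 vertices: a single endpoint already resolves it,
# so the metric dimension is 1.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension(path))  # -> (1, (0,))
```

The exhaustive search over all vertex subsets illustrates why exact solvers (e.g. integer programming, as used for comparison in the abstract) become expensive on large graphs.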
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2023.126848