AutoMTNAS: Automated meta-reinforcement learning on graph tokenization for graph neural architecture search


Detailed Description

Bibliographic Details
Published in: Knowledge-Based Systems, 2025-02, Vol. 310, p. 113023, Article 113023
Main authors: Nie, Mingshuo; Chen, Dongming; Chen, Huilin; Wang, Dongqi
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: Graph neural networks have achieved breakthroughs in various fields due to their powerful automated representation capabilities for graphs. Designing effective graph neural architectures is critical for feature representation and property prediction on non-Euclidean graph-structured data. However, this design process relies heavily on researchers' strong prior knowledge and experience. The inherent complexity and irregularity of graph-structured data make it challenging for existing methods to develop strategies for capturing expressive representations beyond traditional paradigms, resulting in unaffordable computational cost and precision loss across diverse graphs. To this end, we propose a novel automated meta-reinforcement learning on graph tokenization method for graph neural architecture search, named AutoMTNAS, to learn a more general and reliable architecture search policy. In particular, our graph tokenization method identifies critical nodes and structural patterns within the graph and captures label-aware global information to summarize potentially valuable insights. We define a meta-reinforcement learning searcher that utilizes parameter sharing and policy gradients to discover optimal architectures for new tasks, even with limited available observations. Extensive experiments on benchmark datasets, ranging from small to large, demonstrate that AutoMTNAS outperforms human-invented architectures and existing graph neural architecture search methods.

Highlights:
• Identify structural information and capture label-aware global information.
• Search for optimal architectures with a meta-reinforcement learning searcher.
• Validate the effectiveness of AutoMTNAS through extensive experiments.
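The abstract describes a searcher that uses policy gradients to select architectures from a discrete search space. As context for that idea, the sketch below shows a generic REINFORCE-style controller over a toy, purely illustrative GNN search space; the space, the `mock_reward` stand-in for validation accuracy, and all names are assumptions for illustration, not the paper's actual AutoMTNAS design (which additionally uses graph tokenization, meta-learning across tasks, and parameter sharing).

```python
import math
import random

# Illustrative search space: per-decision choices for a small GNN.
# These options are hypothetical, not taken from the paper.
SEARCH_SPACE = {
    "aggregator": ["mean", "max", "sum"],
    "activation": ["relu", "tanh"],
    "hidden_dim": [16, 32, 64],
}

def softmax(logits):
    """Convert a list of logits into a probability distribution."""
    weights = [math.exp(l) for l in logits]
    total = sum(weights)
    return [w / total for w in weights]

def sample_architecture(logits):
    """Sample one option per decision from the controller's policy."""
    arch = {}
    for key, options in SEARCH_SPACE.items():
        probs = softmax(logits[key])
        idx = random.choices(range(len(options)), weights=probs)[0]
        arch[key] = options[idx]
    return arch

def mock_reward(arch):
    """Stand-in for the validation accuracy of a trained architecture."""
    return (arch["aggregator"] == "sum") + 0.5 * (arch["hidden_dim"] == 64)

def reinforce_search(steps=300, lr=0.2, seed=0):
    """REINFORCE with a moving-average baseline over the search space."""
    random.seed(seed)
    logits = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}
    baseline = 0.0
    for _ in range(steps):
        arch = sample_architecture(logits)
        reward = mock_reward(arch)
        baseline = 0.9 * baseline + 0.1 * reward
        advantage = reward - baseline
        # Policy-gradient update: for each decision, the gradient of
        # log pi(chosen) w.r.t. logit i is (1[i == chosen] - prob_i).
        for key, options in SEARCH_SPACE.items():
            chosen = options.index(arch[key])
            probs = softmax(logits[key])
            for i in range(len(options)):
                indicator = 1.0 if i == chosen else 0.0
                logits[key][i] += lr * advantage * (indicator - probs[i])
    # Return the greedy (argmax) architecture after search.
    return {k: v[logits[k].index(max(logits[k]))]
            for k, v in SEARCH_SPACE.items()}
```

In a real searcher, `mock_reward` would be replaced by training and validating each sampled architecture, which is exactly the expensive step that parameter sharing, as mentioned in the abstract, is meant to amortize.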
ISSN:0950-7051
DOI:10.1016/j.knosys.2025.113023