Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning

Graph Neural Networks (GNNs) are proficient in graph representation learning and achieve promising performance on versatile tasks such as node classification and link prediction. Usually, comprehensive hyperparameter tuning is essential for fully unlocking a GNN's top performance, especially for complicated tasks such as node classification on large graphs and long-range graphs, and it typically comes with high computational and time costs and careful design of appropriate search spaces. This work introduces a graph-conditioned latent diffusion framework (GNN-Diff) to generate high-performing GNNs based on the model checkpoints of sub-optimal hyperparameters selected by a light-tuning coarse search. We validate our method through 166 experiments across four graph tasks: node classification on small, large, and long-range graphs, as well as link prediction. Our experiments involve 10 classic and state-of-the-art target models and 20 publicly available datasets. The results consistently demonstrate that GNN-Diff (1) boosts the performance of GNNs with efficient hyperparameter tuning and (2) presents high stability and generalizability on unseen data across multiple generation runs. The code is available at https://github.com/lequanlin/GNN-Diff.
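To make the two-stage idea in the abstract concrete, the sketch below illustrates only the first stage: a light coarse search over a small hyperparameter grid for a toy two-layer GCN, followed by collecting flattened parameter checkpoints of the selected configuration. It is not the authors' GNN-Diff implementation; the synthetic data, model sizes, grid values, and all names are illustrative assumptions, and PyTorch is assumed to be available.

```python
# Minimal, illustrative sketch (NOT the authors' GNN-Diff code) of the first stage
# described in the abstract: a light "coarse search" over a small hyperparameter
# grid for a toy two-layer GCN, then collecting flattened parameter checkpoints of
# the selected configuration. All data, sizes, and grid values are placeholders.
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Synthetic stand-in graph: 100 nodes, 16 features, 4 classes, sparse random edges.
N, F_IN, C = 100, 16, 4
X = torch.randn(N, F_IN)
y = torch.randint(0, C, (N,))
A = (torch.rand(N, N) < 0.05).float()
A = ((A + A.T) > 0).float()
A.fill_diagonal_(1.0)                                   # add self-loops
D_inv_sqrt = torch.diag(A.sum(1).pow(-0.5))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt                     # symmetric normalization

class TinyGCN(nn.Module):
    """Two-layer GCN using a dense normalized adjacency."""
    def __init__(self, hidden):
        super().__init__()
        self.lin1 = nn.Linear(F_IN, hidden)
        self.lin2 = nn.Linear(hidden, C)

    def forward(self, x, a_hat):
        x = F.relu(a_hat @ self.lin1(x))
        return a_hat @ self.lin2(x)

def train(hidden, lr, epochs=50):
    """Train one configuration; accuracy on the synthetic nodes stands in for
    the validation metric a real search would use."""
    model = TinyGCN(hidden)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(X, A_hat), y).backward()
        opt.step()
    acc = (model(X, A_hat).argmax(1) == y).float().mean().item()
    return model, acc

# Stage 1: coarse search over a deliberately small grid (the "light tuning").
grid = {"hidden": [16, 32], "lr": [1e-2, 1e-3]}
best_cfg, best_acc = None, -1.0
for hidden, lr in itertools.product(grid["hidden"], grid["lr"]):
    _, acc = train(hidden, lr)
    if acc > best_acc:
        best_cfg, best_acc = (hidden, lr), acc
print(f"coarse-search pick: hidden={best_cfg[0]}, lr={best_cfg[1]}, acc={best_acc:.3f}")

# Stage 2 (data collection only): retrain the picked configuration a few times and
# flatten each checkpoint into one parameter vector. In GNN-Diff, vectors like these
# are what a graph-conditioned latent diffusion model learns to generate.
checkpoints = []
for _ in range(3):
    model, _ = train(*best_cfg)
    checkpoints.append(torch.cat([p.detach().flatten() for p in model.parameters()]))
param_data = torch.stack(checkpoints)                   # shape: (num_runs, num_params)
print("collected checkpoint vectors:", tuple(param_data.shape))
```

Because the coarse search only needs to locate a reasonable region of the hyperparameter space rather than its best point, the grid can stay small, which is where the claimed efficiency over a full fine-grained search would come from.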

Bibliographic Details
Main authors: Lin, Lequan; Shi, Dai; Han, Andi; Wang, Zhiyong; Gao, Junbin
Format: Article
Language: English
Subjects: Computer Science - Learning
Published: 2024-10-08
Source: arXiv.org
DOI: 10.48550/arxiv.2410.05697
Online access: https://arxiv.org/abs/2410.05697
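As a further illustration of the generation stage described in the abstract, the toy sketch below fits a small denoising diffusion model directly on flattened parameter vectors, such as the `param_data` collected in the sketch above, and then samples new candidate vectors from noise. GNN-Diff itself operates in a learned latent space and conditions the denoiser on graph information; both are omitted here, and every dimension, schedule value, and the stand-in `param_data` are placeholder assumptions rather than values from the paper.

```python
# Toy follow-up sketch (again, NOT the GNN-Diff implementation): fit a tiny
# denoising diffusion model directly on flattened parameter vectors, then sample
# new candidate vectors from noise. GNN-Diff additionally works in a learned
# latent space and conditions the denoiser on graph information; both are omitted
# here, and every dimension and schedule value is a placeholder.
import torch
import torch.nn as nn

torch.manual_seed(0)

P = 128                                     # pretend length of a flattened checkpoint
param_data = torch.randn(64, P) * 0.1       # stand-in for collected checkpoint vectors

T = 100                                     # diffusion steps
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

# Denoiser: a small MLP that predicts the noise added at step t.
denoiser = nn.Sequential(nn.Linear(P + 1, 256), nn.SiLU(), nn.Linear(256, P))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

for step in range(2000):                    # standard DDPM noise-prediction objective
    x0 = param_data[torch.randint(0, param_data.shape[0], (32,))]
    t = torch.randint(0, T, (32,))
    noise = torch.randn_like(x0)
    a_bar = alpha_bar[t].unsqueeze(1)
    xt = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    t_in = (t.float() / T).unsqueeze(1)     # crude scalar timestep embedding
    pred = denoiser(torch.cat([xt, t_in], dim=1))
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Ancestral sampling: start from noise and denoise step by step; each sampled row is
# a candidate parameter vector that would be reshaped back into GNN weights and
# evaluated, as in the multiple generation runs mentioned in the abstract.
with torch.no_grad():
    x = torch.randn(4, P)
    for t in reversed(range(T)):
        t_in = torch.full((4, 1), t / T)
        eps = denoiser(torch.cat([x, t_in], dim=1))
        x = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
print("sampled candidate parameter vectors:", tuple(x.shape))
```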