Graph Learning with Distributional Edge Layouts

Graph Neural Networks (GNNs) learn from graph-structured data by passing local messages between neighboring nodes along edges on certain topological layouts. Typically, these topological layouts in modern GNNs are deterministically computed (e.g., attention-based GNNs) or locally sampled (e.g., GraphSAGE) under heuristic assumptions. In this paper, we pose, for the first time, that these layouts can be globally sampled via Langevin dynamics following a Boltzmann distribution equipped with an explicit physical energy, leading to higher feasibility in the physical world. We argue that such a collection of sampled/optimized layouts can capture the wide energy distribution and bring extra expressivity on top of the WL test, therefore easing downstream tasks. As such, we propose Distributional Edge Layouts (DELs) to serve as a complement to a variety of GNNs. DEL is a pre-processing strategy independent of subsequent GNN variants, and is thus highly flexible. Experimental results demonstrate that DELs consistently and substantially improve a series of GNN baselines, achieving state-of-the-art performance on multiple datasets.
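For intuition, here is a minimal sketch (not the authors' implementation) of the core idea described in the abstract: treat a node layout as a sample from a Boltzmann distribution p(pos) ∝ exp(−E(pos)/T) over a hand-picked physical energy, and draw a collection of layouts with unadjusted Langevin dynamics. The spring-plus-repulsion energy, step size, and temperature below are illustrative assumptions; the paper's actual energy functions and sampler settings may differ.

```python
import numpy as np

def energy_and_grad(pos, edges, k=1.0, eps=1e-6):
    """Toy 'physical' layout energy: attractive springs on edges plus
    pairwise repulsion between all nodes (Fruchterman-Reingold flavored).
    The exact energy used by DEL is an assumption here, chosen for brevity."""
    src, dst = edges[:, 0], edges[:, 1]
    d_edge = pos[src] - pos[dst]                      # (num_edges, dim)
    attract = np.sum(d_edge ** 2)                     # spring term
    grad = np.zeros_like(pos)
    np.add.at(grad, src, 2.0 * d_edge)                # d(attract)/d(pos)
    np.add.at(grad, dst, -2.0 * d_edge)
    diff = pos[:, None, :] - pos[None, :, :]          # (n, n, dim)
    dist = np.linalg.norm(diff, axis=-1) + eps
    np.fill_diagonal(dist, np.inf)                    # no self-repulsion
    repel = 0.5 * k * np.sum(1.0 / dist)              # each pair counted twice
    grad -= k * np.sum(diff / dist[..., None] ** 3, axis=1)
    return attract + repel, grad

def sample_layouts(edges, n_nodes, n_samples=8, n_steps=300,
                   step=1e-3, temperature=0.1, dim=2, seed=0):
    """Draw layouts from p(pos) ~ exp(-E(pos)/T) with unadjusted Langevin
    dynamics: noisy gradient descent whose stationary distribution is
    the Boltzmann distribution over layouts."""
    rng = np.random.default_rng(seed)
    layouts = []
    for _ in range(n_samples):
        pos = rng.normal(size=(n_nodes, dim))         # random init per chain
        for _ in range(n_steps):
            _, grad = energy_and_grad(pos, edges)
            noise = rng.normal(size=pos.shape)
            pos = pos - step * grad / temperature + np.sqrt(2.0 * step) * noise
        layouts.append(pos.copy())
    return layouts                                    # a distribution of layouts

# Usage on a toy 4-cycle; the sampled layouts could then be turned into
# edge features (e.g., sampled edge lengths) for a downstream GNN.
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])
layouts = sample_layouts(edges, n_nodes=4)
```

Because sampling happens before any training, such a collection of layouts can be cached and reused across GNN variants, which matches the abstract's framing of DEL as a model-agnostic pre-processing step.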

Bibliographic Details
Main Authors: Zhao, Xinjian; Ying, Chaolong; Yu, Tianshu
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
Online Access: Order full text
DOI: 10.48550/arxiv.2402.16402
Publication Date: 2024-02-26
Rights: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0), open access
Source: arXiv.org