Towards characterizing the value of edge embeddings in Graph Neural Networks

Graph neural networks (GNNs) are the dominant approach to solving machine learning problems defined over graphs. Despite much theoretical and empirical work in recent years, our understanding of finer-grained aspects of architectural design for GNNs remains impoverished. In this paper, we consider the benefits of architectures that maintain and update edge embeddings. On the theoretical front, under a suitable computational abstraction for a layer in the model, as well as memory constraints on the embeddings, we show that there are natural tasks on graphical models for which architectures leveraging edge embeddings can be much shallower. Our techniques are inspired by results on time-space tradeoffs in theoretical computer science. Empirically, we show architectures that maintain edge embeddings almost always improve on their node-based counterparts, frequently significantly so in topologies that have "hub" nodes.
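
To make the architectural distinction in the abstract concrete, here is a minimal sketch (not code from the paper; the update rules, function names, and shapes are illustrative assumptions) contrasting a node-based message-passing layer with one that additionally maintains and updates a per-edge embedding:

```python
import numpy as np

def node_only_layer(h, edges, W):
    """One round of node-based message passing: every node sums its
    neighbours' embeddings, then applies a shared linear map and
    nonlinearity. (Illustrative update rule, not the paper's.)"""
    agg = np.zeros_like(h)
    for u, v in edges:               # treat edges as undirected
        agg[u] += h[v]
        agg[v] += h[u]
    return np.tanh((h + agg) @ W)

def edge_embedding_layer(h, e, W_node, W_edge):
    """One round that also maintains a per-edge state e[(u, v)]: each
    edge state is updated from its endpoints and its previous value,
    and nodes aggregate the refreshed edge states instead of raw
    neighbour embeddings. (Again, an illustrative rule.)"""
    new_e = {}
    agg = np.zeros_like(h)
    for (u, v), e_uv in e.items():
        new_e[(u, v)] = np.tanh((e_uv + h[u] + h[v]) @ W_edge)
        agg[u] += new_e[(u, v)]
        agg[v] += new_e[(u, v)]
    return np.tanh((h + agg) @ W_node), new_e

# Toy "hub" topology: node 0 connects to every other node.
rng = np.random.default_rng(0)
n, d = 5, 8
edges = [(0, i) for i in range(1, n)]
h = rng.standard_normal((n, d))
e = {uv: rng.standard_normal(d) for uv in edges}
W = rng.standard_normal((d, d)) / np.sqrt(d)

h_node = node_only_layer(h, edges, W)
h_edge, e_new = edge_embedding_layer(h, e, W, W)
```

Intuitively, and consistent with the abstract's remark about "hub" topologies, the per-edge states let distinct messages pass through a high-degree node without being collapsed into that node's single fixed-size embedding.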

Bibliographic Details
Main Authors: Rohatgi, Dhruv; Marwah, Tanya; Lipton, Zachary Chase; Lu, Jianfeng; Moitra, Ankur; Risteski, Andrej
Format: Article
Language: English
Subjects: Computer Science - Learning
Date: 2024-10-13
DOI: 10.48550/arxiv.2410.09867
Source: arXiv.org
Online Access: Full text at https://arxiv.org/abs/2410.09867