Graph Neural Networks Need Cluster-Normalize-Activate Modules
Graph Neural Networks (GNNs) are non-Euclidean deep learning models for graph-structured data. Despite their successful and diverse applications, oversmoothing prohibits deep architectures due to node features converging to a single fixed point. This severely limits their potential to solve complex tasks. To counteract this tendency, we propose a plug-and-play module consisting of three steps: Cluster-Normalize-Activate (CNA). By applying CNA modules, GNNs search and form super nodes in each layer, which are normalized and activated individually. We demonstrate in node classification and property prediction tasks that CNA significantly improves the accuracy over the state-of-the-art. Particularly, CNA reaches 94.18% and 95.75% accuracy on Cora and CiteSeer, respectively. It further benefits GNNs in regression tasks as well, reducing the mean squared error compared to all baselines. At the same time, GNNs with CNA require substantially fewer learnable parameters than competing architectures.
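The abstract describes CNA as a per-layer, three-step module: node features are grouped into super nodes (clusters), and each cluster is then normalized and activated individually. Below is a minimal sketch of what such a step could look like; it is not the authors' implementation, and the class name `CNAModule`, the plain k-means clustering, the per-cluster `LayerNorm`, and the per-cluster `PReLU` activation are all illustrative assumptions.

```python
# Minimal, illustrative sketch of a Cluster-Normalize-Activate (CNA) step.
# Assumptions (not taken from the paper): k-means for clustering, LayerNorm for
# normalization, PReLU for activation, and the class name CNAModule itself.
import torch
import torch.nn as nn


class CNAModule(nn.Module):
    def __init__(self, dim: int, num_clusters: int = 8, kmeans_iters: int = 10):
        super().__init__()
        self.num_clusters = num_clusters
        self.kmeans_iters = kmeans_iters
        # One normalization and one activation per cluster ("super node"),
        # so each cluster is normalized and activated individually.
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_clusters)])
        self.acts = nn.ModuleList([nn.PReLU() for _ in range(num_clusters)])

    @torch.no_grad()
    def _cluster(self, x: torch.Tensor) -> torch.Tensor:
        # Plain k-means on the node features; returns one cluster id per node.
        centers = x[torch.randperm(x.size(0))[: self.num_clusters]].clone()
        for _ in range(self.kmeans_iters):
            assign = torch.cdist(x, centers).argmin(dim=1)
            for k in range(centers.size(0)):
                mask = assign == k
                if mask.any():
                    centers[k] = x[mask].mean(dim=0)
        return torch.cdist(x, centers).argmin(dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, dim] node features produced by a GNN layer.
        assign = self._cluster(x)
        out = torch.empty_like(x)
        for k in range(self.num_clusters):
            mask = assign == k
            if mask.any():
                out[mask] = self.acts[k](self.norms[k](x[mask]))
        return out
```

In a GNN, such a module would sit after each message-passing layer, e.g. `x = cna(conv(x, edge_index))`, replacing the usual shared normalization and activation with per-cluster ones.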
Saved in:
Published in: | arXiv.org 2024-12 |
---|---|
Main authors: | Skryagin, Arseny; Divo, Felix; Mohammad Amin Ali; Devendra Singh Dhami; Kersting, Kristian |
Format: | Article |
Language: | eng |
Subjects: | Clusters; Convergence; Graph neural networks; Machine learning; Modules; Structured data; Task complexity |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Skryagin, Arseny; Divo, Felix; Mohammad Amin Ali; Devendra Singh Dhami; Kersting, Kristian |
description | Graph Neural Networks (GNNs) are non-Euclidean deep learning models for graph-structured data. Despite their successful and diverse applications, oversmoothing prohibits deep architectures due to node features converging to a single fixed point. This severely limits their potential to solve complex tasks. To counteract this tendency, we propose a plug-and-play module consisting of three steps: Cluster-Normalize-Activate (CNA). By applying CNA modules, GNNs search and form super nodes in each layer, which are normalized and activated individually. We demonstrate in node classification and property prediction tasks that CNA significantly improves the accuracy over the state-of-the-art. Particularly, CNA reaches 94.18% and 95.75% accuracy on Cora and CiteSeer, respectively. It further benefits GNNs in regression tasks as well, reducing the mean squared error compared to all baselines. At the same time, GNNs with CNA require substantially fewer learnable parameters than competing architectures. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3141681481 |
source | Freely Accessible Journals |
subjects | Clusters; Convergence; Graph neural networks; Machine learning; Modules; Structured data; Task complexity |
title | Graph Neural Networks Need Cluster-Normalize-Activate Modules |