Expressivity and Generalization: Fragment-Biases for Molecular GNNs
Although recent advances in higher-order Graph Neural Networks (GNNs) improve theoretical expressiveness and molecular property prediction performance, they often fall short of the empirical performance of models that explicitly use fragment information as inductive bias. However, no theoretical expressivity study exists for these approaches. In this work, we propose the Fragment-WL test, an extension of the well-known Weisfeiler & Leman (WL) test, which enables the theoretical analysis of these fragment-biased GNNs. Building on the insights gained from the Fragment-WL test, we develop a new GNN architecture and a fragmentation with infinite vocabulary that significantly boosts expressiveness. We demonstrate the effectiveness of our model on synthetic and real-world data, outperforming all GNNs on Peptides and achieving 12% lower error than all GNNs on ZINC and 34% lower error than other fragment-biased models. Furthermore, we show that our model exhibits superior generalization capabilities compared to the latest transformer-based architectures, positioning it as a robust solution for a range of molecular modeling tasks.
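The abstract builds on the Weisfeiler & Leman (WL) test. For orientation, here is a minimal sketch of the standard 1-WL colour-refinement procedure that Fragment-WL extends; it is not the paper's method, and the function names, the nested-tuple colour encoding, and the example graphs are illustrative assumptions.

```python
from collections import Counter

def wl_refine(adj, init_colours, rounds=3):
    """Standard 1-WL colour refinement.
    adj: dict node -> iterable of neighbour nodes (undirected adjacency)
    init_colours: dict node -> initial label (e.g. atom type)
    """
    colours = dict(init_colours)
    for _ in range(rounds):
        # New colour = old colour combined with the sorted multiset of neighbour colours.
        colours = {
            v: (colours[v], tuple(sorted(Counter(colours[u] for u in adj[v]).items())))
            for v in adj
        }
    return colours

def wl_separates(adj1, col1, adj2, col2, rounds=3):
    """True iff 1-WL certifies the two coloured graphs as non-isomorphic,
    i.e. their refined colour histograms differ."""
    h1 = Counter(wl_refine(adj1, col1, rounds).values())
    h2 = Counter(wl_refine(adj2, col2, rounds).values())
    return h1 != h2

# Classic failure case: a 6-cycle and two disjoint triangles are 1-WL-equivalent.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
uniform = {i: "C" for i in range(6)}
print(wl_separates(cycle6, uniform, triangles, uniform))  # False -- 1-WL cannot tell them apart
```

The closing example (a 6-cycle vs. two triangles) is the classic case where 1-WL fails; ring or fragment information of the kind the paper studies is one way to gain the missing expressive power.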
Published in: | arXiv.org, 2024-07 |
---|---|
Main authors: | Wollschläger, Tom; Kemper, Niklas; Hetzel, Leon; Sommer, Johanna; Günnemann, Stephan |
Format: | Article |
Language: | English |
ISSN: | 2331-8422 (EISSN) |
Subjects: | Bias; Graph neural networks; Molecular properties; Peptides; Performance prediction |
Online access: | Full text |