GOAt: Explaining Graph Neural Networks via Graph Output Attribution
Understanding the decision-making process of Graph Neural Networks (GNNs) is crucial to their interpretability. Most existing methods for explaining GNNs rely on training auxiliary models, so the explanations themselves remain black-boxed. This paper introduces Graph Output Attribution (GOAt), a novel method to attribute graph outputs to input graph features, producing GNN explanations that are faithful, discriminative, and stable across similar samples. By expanding the GNN as a sum of scalar products involving node features, edge features, and activation patterns, we propose an efficient analytical method to compute the contribution of each node or edge feature to each scalar product, and to aggregate the contributions from all scalar products in the expansion to derive the importance of each node and edge. Through extensive experiments on synthetic and real-world data, we show that our method not only outperforms various state-of-the-art GNN explainers on the commonly used fidelity metric, but also exhibits stronger discriminability and stability by a remarkable margin.
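To make the expansion idea concrete, here is a minimal toy sketch (not the paper's actual GOAt implementation) for a one-layer linear "GNN" with sum pooling, y = sum(A @ X @ W). Without nonlinearities, every scalar product A[i,j] * X[j,k] * W[k,l] in the expansion involves exactly one edge (i, j), so summing the products containing a given edge yields that edge's contribution, and the contributions add back up to the output. The graph, features, and weights below are arbitrary illustrative values.

```python
import numpy as np

# Toy illustration of attribution by expansion: expand the graph output
# into a sum of scalar products and credit each product to the edge it uses.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])       # adjacency of a 3-node path graph
X = rng.normal(size=(3, 2))        # node features
W = rng.normal(size=(2, 1))        # layer weights

y = float((A @ X @ W).sum())       # scalar graph output

# Edge (i, j)'s contribution: the sum of all scalar products
# A[i,j] * X[j,k] * W[k,l] in the expansion that contain A[i, j].
edge_contrib = np.zeros_like(A)
for i in range(3):
    for j in range(3):
        edge_contrib[i, j] = A[i, j] * float(X[j] @ W)

# Completeness: the edge contributions sum exactly back to the output.
assert np.isclose(edge_contrib.sum(), y)
```

The actual method handles multiple layers and activation patterns analytically; this sketch only shows why the expansion makes per-edge attribution a bookkeeping exercise rather than a learned approximation.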
Saved in:
Published in: | arXiv.org 2024-01 |
---|---|
Main Authors: | Lu, Shengyao; Mills, Keith G; Jiao He; Liu, Bang; Niu, Di |
Format: | Article |
Language: | eng |
Keywords: | Graph neural networks; Neural networks; Nodes |
Online Access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Lu, Shengyao; Mills, Keith G; Jiao He; Liu, Bang; Niu, Di |
description | Understanding the decision-making process of Graph Neural Networks (GNNs) is crucial to their interpretability. Most existing methods for explaining GNNs rely on training auxiliary models, so the explanations themselves remain black-boxed. This paper introduces Graph Output Attribution (GOAt), a novel method to attribute graph outputs to input graph features, producing GNN explanations that are faithful, discriminative, and stable across similar samples. By expanding the GNN as a sum of scalar products involving node features, edge features, and activation patterns, we propose an efficient analytical method to compute the contribution of each node or edge feature to each scalar product, and to aggregate the contributions from all scalar products in the expansion to derive the importance of each node and edge. Through extensive experiments on synthetic and real-world data, we show that our method not only outperforms various state-of-the-art GNN explainers on the commonly used fidelity metric, but also exhibits stronger discriminability and stability by a remarkable margin. |
format | Article |
fullrecord | ProQuest record cdi_proquest_journals_2919764344 (arXiv.org, published 2024-01-26; Cornell University Library, Ithaca; EISSN 2331-8422; license: http://creativecommons.org/licenses/by/4.0/; raw XML export omitted) |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-01 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2919764344 |
source | Free E- Journals |
subjects | Graph neural networks; Neural networks; Nodes |
title | GOAt: Explaining Graph Neural Networks via Graph Output Attribution |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T21%3A00%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=GOAt:%20Explaining%20Graph%20Neural%20Networks%20via%20Graph%20Output%20Attribution&rft.jtitle=arXiv.org&rft.au=Lu,%20Shengyao&rft.date=2024-01-26&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2919764344%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2919764344&rft_id=info:pmid/&rfr_iscdi=true |