Explaining Dynamic Graph Neural Networks via Relevance Back-propagation


Detailed Description

Bibliographic Details
Main Authors: Xie, Jiaxuan; Liu, Yezi; Shen, Yanning
Format: Article
Language: English
Subjects: Computer Science - Learning
Online Access: Full text
creator Xie, Jiaxuan; Liu, Yezi; Shen, Yanning
description Graph Neural Networks (GNNs) have shown remarkable effectiveness in capturing the abundant information in graph-structured data. However, the black-box nature of GNNs hinders users from understanding and trusting the models, which limits their applications. While recent years have witnessed a surge of studies on explaining GNNs, most focus on static graphs, leaving the explanation of dynamic GNNs nearly unexplored. Explaining dynamic GNNs is challenging because of their time-varying graph structures. Directly applying existing explainers designed for static graphs to dynamic graphs is not feasible, because they ignore the temporal dependencies among snapshots. In this work, we propose DGExplainer to provide reliable explanations for dynamic GNNs. DGExplainer redistributes the output activation score of a dynamic GNN to the relevances of the neurons in its previous layer, and iterates this process until the relevance scores of the input neurons are obtained. We conduct quantitative and qualitative experiments on real-world datasets to demonstrate the effectiveness of the proposed framework at identifying important nodes for link prediction and node regression with dynamic GNNs.
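The layer-by-layer redistribution the abstract describes follows the general pattern of layer-wise relevance propagation (LRP). As a hedged illustration only — this is not the authors' DGExplainer implementation, which additionally handles recurrent and graph-convolution layers and temporal dependencies — the generic epsilon-stabilized LRP rule can be sketched for a plain linear network; all names and shapes here are illustrative:

```python
import numpy as np

def lrp_linear(a, W, R_out, eps=1e-6):
    """Redistribute the relevance R_out of a layer's outputs back to its
    inputs with the epsilon-stabilized LRP rule; the returned input
    relevances sum to approximately R_out.sum() (conservation)."""
    z = a @ W                                             # pre-activations, shape (d_out,)
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))   # stabilized ratios
    return a * (W @ s)                                    # input relevances, shape (d_in,)

# Toy two-layer linear network (illustrative weights, not from the paper).
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)), rng.standard_normal((3, 1))]
x = rng.standard_normal(4)

# Forward pass, caching each layer's input activations.
acts = [x]
for W in weights:
    acts.append(acts[-1] @ W)

# Backward relevance pass: start from the output activation score and
# iterate until relevance scores for the input neurons are obtained.
R = acts[-1]
for W, a in zip(reversed(weights), reversed(acts[:-1])):
    R = lrp_linear(a, W, R)

print(R)  # per-input relevance scores
```

Because each application of the rule approximately conserves total relevance, the final per-input scores sum to roughly the original output score, which is what makes the scores interpretable as a decomposition of the prediction.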
doi 10.48550/arxiv.2207.11175
format Article
creationdate 2022-07-22
language eng
source arXiv.org
subjects Computer Science - Learning
title Explaining Dynamic Graph Neural Networks via Relevance Back-propagation
url https://arxiv.org/abs/2207.11175