A Gradient-Based Explanation Method for Node Classification Using Graph Convolutional Networks

Explainable artificial intelligence is a method that explains how a complex model (e.g., a deep neural network) yields its output from a given input. Recently, graph-type data have been widely used in various fields, and diverse graph neural networks (GNNs) have been developed for graph-type data. However, methods to explain the behavior of GNNs have not been studied much, and only a limited understanding of GNNs is currently available. Therefore, in this paper, we propose an explanation method for node classification using graph convolutional networks (GCNs), which is a representative type of GNN. The proposed method finds out which features of each node have the greatest influence on the classification of that node using GCN. The proposed method identifies influential features by backtracking the layers of the GCN from the output layer to the input layer using the gradients. The experimental results on both synthetic and real datasets demonstrate that the proposed explanation method accurately identifies the features of each node that have the greatest influence on its classification.
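The gradient-based backtracking the abstract describes can be illustrated with a minimal sketch: a toy 2-layer GCN whose layers are differentiated by hand, from the output score back to the input feature matrix. All names, sizes, and the specific architecture below are illustrative assumptions, not taken from the paper; the authors' exact procedure may differ.

```python
import numpy as np

# Toy 2-layer GCN: H1 = ReLU(A_hat @ X @ W1), Z = A_hat @ H1 @ W2.
# Saliency = dZ[v, c]/dX, obtained by backtracking each layer with gradients,
# in the spirit of the method the abstract describes (details assumed).

rng = np.random.default_rng(0)
n, f, h, k = 4, 3, 5, 2            # nodes, input features, hidden units, classes

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
A_tilde = A + np.eye(n)                       # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt     # symmetric normalization

X = rng.normal(size=(n, f))
W1 = rng.normal(size=(f, h))
W2 = rng.normal(size=(h, k))

def forward(X):
    pre = A_hat @ X @ W1
    H1 = np.maximum(pre, 0.0)      # ReLU
    Z = A_hat @ H1 @ W2            # per-node class scores
    return pre, H1, Z

def saliency(v, c):
    """Gradient of node v's score for class c w.r.t. every input feature."""
    pre, H1, Z = forward(X)
    # Backtrack layer 2: Z[v, c] = sum_ij A_hat[v, i] * H1[i, j] * W2[j, c]
    grad_H1 = np.outer(A_hat[v], W2[:, c])
    # Backtrack ReLU: gradient is zero where the pre-activation was negative
    grad_pre = grad_H1 * (pre > 0)
    # Backtrack layer 1: pre = A_hat @ X @ W1
    return A_hat.T @ grad_pre @ W1.T          # shape (n, f)

S = saliency(v=0, c=1)
print(S.shape)                     # one influence score per node and feature
```

Large entries of `S` mark the input features with the greatest influence on node 0's class-1 score; in an autograd framework the same quantity would come from a single backward pass.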

Detailed description

Saved in:
Bibliographic details
Published in: JIPS(Journal of Information Processing Systems) 2023-12, Vol.19 (6), p.803-816
Main authors: Chaehyeon Kim, Hyewon Ryu, Ki Yong Lee
Format: Article
Language: Korean
Subjects: Explainable Artificial Intelligence; Gradient-based Explanation; Graph Convolutional Network
Online access: Full text
ISSN: 1976-913X
EISSN: 2092-805X
Source: EZB-FREE-00999 freely available EZB journals