On the Feasibility of Fidelity$^-$ for Graph Pruning

Bibliographic Details

Main Authors: Shin, Yong-Min; Shin, Won-Yong
Format: Article
Language: English
Date: 2024-06-17
Online Access: https://arxiv.org/abs/2406.11504

Description:

As one of the popular quantitative metrics for assessing the quality of explanations of graph neural networks (GNNs), fidelity measures the difference in model output after removing unimportant parts of the input graph. Fidelity has been widely used due to its straightforward interpretation: the underlying model should produce similar predictions when the features deemed unimportant by the explanation are removed. This raises a natural question: "Does fidelity induce a global (soft) mask for graph pruning?" To answer this, we explore the potential of the fidelity measure for graph pruning, with the ultimate goal of making GNN models more efficient. To this end, we propose Fidelity$^-$-inspired Pruning (FiP), an effective framework for constructing global edge masks from local explanations. Our empirical observations using 7 edge attribution methods demonstrate that, surprisingly, general eXplainable AI methods outperform methods tailored to GNNs in terms of graph pruning performance.
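
To make the FiP idea concrete, below is a minimal NumPy sketch of one plausible pipeline: aggregate local edge attributions into a global soft mask, prune the lowest-scoring edges, and check the Fidelity$^-$-style output difference. This is an illustrative reconstruction, not the authors' released implementation; the averaging rule, the names global_edge_mask, prune, and fidelity_minus, the keep_ratio parameter, and the predict callable are all assumptions made for this example.

    import numpy as np

    def global_edge_mask(num_edges, local_explanations):
        # Average per-explanation (local) edge scores into one global soft
        # mask. Edges that many local explanations rate as important end up
        # with high global scores; edges never scored stay at zero.
        total = np.zeros(num_edges)
        count = np.zeros(num_edges)
        for edge_ids, scores in local_explanations:
            total[edge_ids] += scores
            count[edge_ids] += 1
        seen = count > 0
        total[seen] /= count[seen]
        return total

    def prune(edge_index, mask, keep_ratio=0.8):
        # Keep the top keep_ratio fraction of edges by global score. This is
        # the "remove unimportant parts" step whose effect Fidelity^- measures.
        k = max(1, int(round(keep_ratio * mask.size)))
        keep = np.sort(np.argsort(-mask)[:k])
        return edge_index[:, keep]

    def fidelity_minus(predict, graph_full, graph_pruned):
        # Fidelity^-: change in the model's output after pruning. Values near
        # zero mean the pruned graph preserves the model's behavior. `predict`
        # is any callable mapping a graph to class probabilities (hypothetical).
        return float(np.abs(predict(graph_full) - predict(graph_pruned)).mean())

    # Toy usage: a graph with 5 edges and two overlapping local explanations.
    edge_index = np.array([[0, 0, 1, 2, 3],
                           [1, 2, 2, 3, 4]])  # 2 x num_edges (source; target)
    local_explanations = [
        (np.array([0, 1, 2]), np.array([0.9, 0.1, 0.8])),
        (np.array([1, 3, 4]), np.array([0.2, 0.7, 0.05])),
    ]
    mask = global_edge_mask(edge_index.shape[1], local_explanations)
    pruned = prune(edge_index, mask, keep_ratio=0.6)
    print(mask)    # [0.9  0.15 0.8  0.7  0.05]
    print(pruned)  # [[0 1 2], [1 2 3]] -- the two lowest-scoring edges dropped

Averaging is only one reasonable aggregation choice; summing or max-pooling the local scores would yield a different global ranking, and the paper's comparison of 7 edge attribution methods varies how the local scores themselves are produced.
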
DOI: 10.48550/arXiv.2406.11504
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Information Theory; Computer Science - Learning; Computer Science - Neural and Evolutionary Computing; Computer Science - Social and Information Networks; Mathematics - Information Theory