On Discrepancies between Perturbation Evaluations of Graph Neural Network Attributions
Saved in:

Main authors: | , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Neural networks are increasingly finding their way into the realm
of graphs and modeling relationships between features. Concurrently, graph
neural network explanation approaches are being invented to uncover
relationships between the nodes of the graphs. However, there is a disparity
between the existing attribution methods, and it is unclear which attribution
to trust. Therefore, research has introduced evaluation experiments that
assess them from different perspectives. In this work, we assess attribution
methods from a perspective not previously explored in the graph domain:
retraining. The core idea is to retrain the network on important (or
unimportant) relationships as identified by the attributions and to evaluate
how networks can generalize based on these relationships. We reformulate the
retraining framework to sidestep issues lurking in the previous formulation
and propose guidelines for correct analysis. We run our analysis on four
state-of-the-art GNN attribution methods and five synthetic and real-world
graph classification datasets. The analysis reveals that attributions perform
variably depending on the dataset and the network. Most importantly, we
observe that the well-known GNNExplainer performs similarly to an arbitrary
designation of edge importance. The study concludes that the retraining
evaluation cannot be used as a generalized benchmark and recommends it as a
toolset for evaluating attributions on a specifically addressed network,
dataset, and sparsity. |
DOI: | 10.48550/arxiv.2401.00633 |
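
The retraining idea described in the summary (keep only the edges an attribution marks important at a given sparsity, retrain the GNN on those edges, and compare against an arbitrary edge designation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper names, the top-k selection rule, and the example scores are assumptions.

```python
import random

def keep_top_edges(edge_scores, sparsity):
    """Return the set of edges kept at a given sparsity level.

    edge_scores: dict mapping edge -> attribution score (higher = more important).
    sparsity: fraction of edges to KEEP (e.g. 0.5 keeps the top 50%).
    Illustrative selection rule; the paper's exact protocol may differ.
    """
    k = max(1, int(len(edge_scores) * sparsity))
    ranked = sorted(edge_scores, key=edge_scores.get, reverse=True)
    return set(ranked[:k])

def random_baseline(edge_scores, sparsity, seed=0):
    """Arbitrary designation of edge importance, used as a comparison point."""
    rng = random.Random(seed)
    k = max(1, int(len(edge_scores) * sparsity))
    return set(rng.sample(sorted(edge_scores), k))

# One graph's edge attributions (hypothetical values).
scores = {(0, 1): 0.9, (1, 2): 0.1, (2, 3): 0.7, (3, 0): 0.3}
kept = keep_top_edges(scores, sparsity=0.5)

# A retraining evaluation would now retrain the GNN on graphs containing
# only the `kept` edges, then compare its test accuracy against a model
# retrained on `random_baseline(scores, 0.5)` at the same sparsity.
```

If an attribution method carries real signal, the model retrained on its top edges should generalize better than the random-baseline model; the study reports that for GNNExplainer the two are often comparable.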