Generating Real-time Explanations for GNNs via Multiple Specialty Learners and Online Knowledge Distillation


Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Authors: Bui, Tien-Cuong; Le, Van-Duc; Li, Wen-Syan
Format: Article
Language: English
Online access: Full text
Description
Abstract: Graph Neural Networks (GNNs) have become increasingly ubiquitous in numerous applications and systems, necessitating explanations of their predictions, especially when making critical decisions. However, explaining GNNs is challenging due to the complexity of graph data and model execution. Post-hoc explanation approaches have gained popularity due to their versatility, despite their additional computational costs. Although intrinsically interpretable models can provide instant explanations, they are usually model-specific and can only explain particular GNNs. To address these challenges, we propose a novel, general, and fast GNN explanation framework named SCALE. SCALE trains multiple specialty learners to explain GNNs, as creating a single powerful explainer for examining the attributions of interactions in input graphs is complicated. In training, a black-box GNN model guides learners based on an online knowledge distillation paradigm. During the explanation phase, explanations of predictions are generated by multiple explainers corresponding to trained learners. Edge masking and random walk with restart procedures are used to provide structural explanations for graph-level and node-level predictions, respectively. A feature attribution module provides overall summaries and instance-level feature contributions. We compare SCALE with state-of-the-art baselines through quantitative and qualitative experiments to demonstrate its explanation correctness and execution performance. Furthermore, we conduct a user study and a series of ablation studies to understand the strengths and weaknesses of the proposed framework.
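The abstract mentions random walk with restart (RWR) as the procedure behind node-level structural explanations. As a rough illustration only (not the authors' implementation, whose details are in the paper), the sketch below computes standard RWR scores on a dense adjacency matrix with NumPy: the walk repeatedly steps along column-normalized edges and teleports back to the seed node with probability `restart_prob`. All names and the choice of `restart_prob = 0.15` are assumptions for this sketch; nodes with high stationary scores would be candidate explanatory neighbors of the seed.

```python
import numpy as np

def random_walk_with_restart(adj, seed, restart_prob=0.15, tol=1e-8, max_iter=200):
    """Stationary visiting probabilities of a walk restarting at `seed`.

    adj : (n, n) dense adjacency matrix (nonnegative edge weights).
    seed : index of the node whose prediction is being explained.
    Returns a length-n vector of proximity scores summing to 1.
    Illustrative sketch; not the SCALE implementation.
    """
    n = adj.shape[0]
    # Column-normalize so each column gives outgoing transition probabilities.
    col_sums = adj.sum(axis=0, keepdims=True).astype(float)
    col_sums[col_sums == 0] = 1.0  # avoid division by zero for isolated nodes
    w = adj / col_sums

    e = np.zeros(n)
    e[seed] = 1.0  # restart (teleport) distribution concentrated on the seed
    p = e.copy()
    for _ in range(max_iter):
        # One power-iteration step: walk with prob (1 - c), restart with prob c.
        p_next = (1 - restart_prob) * (w @ p) + restart_prob * e
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p
```

On a small path graph 0-1-2 seeded at node 0, the score of node 0 exceeds that of node 2, reflecting graph proximity to the seed.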
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3270385