RELIANT: Fair Knowledge Distillation for Graph Neural Networks
Format: Article
Language: English
Abstract: Graph Neural Networks (GNNs) have shown satisfying performance on various
graph learning tasks. To achieve better fitting capability, most GNNs have a
large number of parameters, which makes them computationally expensive.
Therefore, it is difficult to deploy them onto edge devices with scarce
computational resources, e.g., mobile phones and wearable smart devices.
Knowledge Distillation (KD) is a common solution to compress GNNs, where a
lightweight model (i.e., the student model) is encouraged to mimic the
behavior of a computationally expensive GNN (i.e., the teacher GNN model).
Nevertheless, most existing GNN-based KD methods lack fairness considerations.
As a consequence, the student model usually inherits and even exaggerates the
bias of the teacher GNN. To handle this problem, we take initial steps
towards fair knowledge distillation for GNNs. Specifically, we first formulate
a novel problem of fair knowledge distillation for GNN-based teacher-student
frameworks. Then we propose a principled framework named RELIANT to mitigate
the bias exhibited by the student model. Notably, the design of RELIANT is
decoupled from any specific teacher and student model structures, and thus it
can be easily adapted to various GNN-based KD frameworks. We perform extensive
experiments on multiple real-world datasets, which corroborate that RELIANT
achieves less biased GNN knowledge distillation while maintaining high
prediction utility.
DOI: 10.48550/arxiv.2301.01150
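For illustration only, the sketch below shows the generic teacher-student distillation setup the abstract describes: a lightweight student GNN trained to mimic a larger teacher's soft predictions. The GNN layers, temperature, and loss weighting here are assumptions for the example, and the fairness mechanism that is RELIANT's actual contribution is not included.

```python
# Minimal sketch of GNN knowledge distillation (soft-label KD), assuming plain
# PyTorch. TinyGCN and all hyperparameters are illustrative, not RELIANT's design.
import torch
import torch.nn.functional as F


class TinyGCN(torch.nn.Module):
    """A small dense-adjacency GCN used only as a stand-in teacher/student."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_layers=2):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (num_layers - 1) + [out_dim]
        self.layers = torch.nn.ModuleList(
            [torch.nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)]
        )

    def forward(self, x, adj_norm):
        # adj_norm: normalized adjacency with self-loops (dense, N x N)
        for i, layer in enumerate(self.layers):
            x = adj_norm @ layer(x)
            if i < len(self.layers) - 1:
                x = F.relu(x)
        return x  # node-level logits


def distillation_loss(student_logits, teacher_logits, labels, mask,
                      temperature=2.0, alpha=0.5):
    """Blend KL divergence to the teacher's softened outputs with supervised CE."""
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    ce = F.cross_entropy(student_logits[mask], labels[mask])
    return alpha * kd + (1.0 - alpha) * ce


if __name__ == "__main__":
    # Toy graph: 6 nodes, 8 features, 2 classes; self-loops only for brevity.
    x = torch.randn(6, 8)
    adj = torch.eye(6)
    labels = torch.randint(0, 2, (6,))
    mask = torch.ones(6, dtype=torch.bool)

    teacher = TinyGCN(8, 64, 2, num_layers=3)   # "expensive" teacher GNN
    student = TinyGCN(8, 8, 2, num_layers=2)    # lightweight student
    optimizer = torch.optim.Adam(student.parameters(), lr=0.01)

    with torch.no_grad():
        t_logits = teacher(x, adj)              # teacher outputs are fixed
    for _ in range(10):
        optimizer.zero_grad()
        loss = distillation_loss(student(x, adj), t_logits, labels, mask)
        loss.backward()
        optimizer.step()
```

As the paper notes, a student trained only with such an objective tends to inherit any bias in the teacher's predictions; RELIANT adds fairness-aware components on top of this kind of pipeline.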