FairGP: A Scalable and Fair Graph Transformer Using Graph Partitioning
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Recent studies have highlighted significant fairness issues in Graph Transformer (GT) models, particularly against subgroups defined by sensitive features. Additionally, GTs are computationally intensive and memory-demanding, limiting their application to large-scale graphs. Our experiments demonstrate that graph partitioning can enhance the fairness of GT models while reducing computational complexity. To understand this improvement, we conducted a theoretical investigation into the root causes of fairness issues in GT models. We found that the sensitive features of higher-order nodes disproportionately influence lower-order nodes, resulting in sensitive feature bias. We propose Fairness-aware scalable GT based on Graph Partitioning (FairGP), which partitions the graph to minimize the negative impact of higher-order nodes. By optimizing attention mechanisms, FairGP mitigates the bias introduced by global attention, thereby enhancing fairness. Extensive empirical evaluations on six real-world datasets validate the superior performance of FairGP in achieving fairness compared to state-of-the-art methods. The code is available at https://github.com/LuoRenqiang/FairGP.
DOI: 10.48550/arxiv.2412.10669
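The abstract attributes the bias to global attention, through which the sensitive features of high-degree nodes influence every other node, and proposes confining attention within graph partitions. The following is a minimal sketch of that idea in PyTorch, not the authors' implementation: the function name `partitioned_attention`, the random node features, the fixed partition assignment, and the untrained (identity) query/key/value projections are all illustrative assumptions; FairGP's actual partitioning objective and attention optimization are defined in the paper.

```python
# Hypothetical sketch: attention restricted to within-partition node pairs.
# Cross-partition scores are masked out, so a high-degree node's features
# cannot directly influence nodes outside its own partition.
import torch
import torch.nn.functional as F

def partitioned_attention(x, part):
    """x: [N, d] node features; part: [N] integer partition id per node."""
    N, d = x.shape
    scores = x @ x.T / d ** 0.5                          # [N, N] raw scores
    same_part = part.unsqueeze(0) == part.unsqueeze(1)   # [N, N] block mask
    scores = scores.masked_fill(~same_part, float("-inf"))
    attn = F.softmax(scores, dim=-1)   # each row normalizes over its partition
    return attn @ x                    # partition-local feature mixing

# Toy usage: 8 nodes, 4 features, 2 partitions.
x = torch.randn(8, 4)
part = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
out = partitioned_attention(x, part)
print(out.shape)  # torch.Size([8, 4])
```

Because every node attends only within its partition, the dense N x N score matrix above could be replaced by independent per-partition blocks, which is where the scalability gain mentioned in the abstract would come from.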