Hardware-Sensitive Fairness in Heterogeneous Federated Learning

Bibliographic Details
Published in: ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 2024-11
Authors: Talukder, Zahidur; Lu, Bingqian; Ren, Shaolei; Islam, Mohammad Atiqul
Format: Article
Language: English
Online Access: Full text
Description

Abstract: Federated Learning (FL) is a promising technique for decentralized, privacy-preserving Machine Learning (ML) across a diverse pool of participating devices with varying capabilities. However, existing approaches to such heterogeneous environments do not consider “fairness” in model aggregation, resulting in significant performance variation among devices. Meanwhile, prior work on FL fairness remains hardware-oblivious and cannot be applied directly without severe performance penalties. To address this issue, we propose a novel hardware-sensitive FL method called FairHetero that promotes fairness among heterogeneous federated clients. Our approach offers tunable fairness within a group of devices sharing the same ML architecture as well as across groups with heterogeneous models. Our evaluation on the MNIST, FEMNIST, CIFAR10, and SHAKESPEARE datasets demonstrates that FairHetero reduces the variance of participating clients’ test loss compared to existing state-of-the-art (SOTA) techniques, increasing overall performance.
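The kind of tunable, variance-reducing aggregation the abstract describes can be illustrated with a generic loss-reweighted averaging scheme in the style of q-FFL, a common fairness mechanism in the FL literature. This is a minimal sketch for intuition only, not the paper's FairHetero algorithm; the helper `fair_aggregate` and its `q` parameter are illustrative assumptions.

```python
import numpy as np

def fair_aggregate(updates, losses, q=1.0):
    """Aggregate client model updates, upweighting high-loss clients.

    Illustrative q-FFL-style reweighting (NOT the FairHetero method):
    q = 0 recovers uniform FedAvg-style averaging; larger q shifts
    weight toward struggling clients, shrinking the variance of
    per-client test loss -- the fairness notion discussed above.
    """
    losses = np.asarray(losses, dtype=float)
    weights = losses ** q              # higher loss -> larger weight
    weights /= weights.sum()           # normalize to a convex combination
    return sum(w * np.asarray(u, dtype=float)
               for w, u in zip(weights, updates))
```

With two clients whose losses are 1.0 and 2.0, `q=0` averages their updates equally, while `q=1` gives the second client twice the weight; the single knob `q` is what makes the fairness level tunable.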
ISSN: 2376-3639, 2376-3647
DOI: 10.1145/3703627