Towards collaborative fair federated distillation

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, 2024-11, Vol. 137, p. 109216, Article 109216
Authors: Noor, Faiza Anan; Tabassum, Nawrin; Hussain, Tahmid; Rafi, Taki Hasan; Chae, Dong-Kyu
Format: Article
Language: English
Online access: Full text
Abstract: Federated Learning (FL), despite its success as a privacy-preserving distributed machine learning framework, faces significant bottlenecks throughout the training process, including high communication costs, heterogeneity issues, and unfairness. Federated Distillation (FD) has recently emerged as a promising solution for tackling heterogeneity and improving communication efficiency in FL. In addition, significant effort has been devoted in recent years to supporting various notions of fairness in the FL ecosystem, such as Collaborative Fairness, which seeks to distribute rewards among participants in proportion to their contributions. Although several approaches have been proposed to promote collaborative fairness in FL, they are mostly suited to FL algorithms based on model-update or gradient sharing during training. Guaranteeing collaborative fairness in FD methods remains unexplored, despite potential applications in communication engineering, healthcare, banking, finance, and large-scale social-network software: most Knowledge Distillation (KD)-based FL algorithms share identical global logits or identical global model updates with all clients after the distillation process. This is unfair, because severely underperforming participants gain access to the knowledge of all high-performing participants while contributing almost nothing to the learning process. In this paper, we propose a novel Collaborative Fair Federated Distillation (CFD) algorithm that brings collaborative fairness to KD-based Federated Learning strategies. We leverage a reputation mechanism to rank participants by their contributions and distribute logits among them accordingly, while maintaining competitive performance. Extensive experiments on benchmark datasets validate the efficacy of our proposed method and the practicality of the proposed logit-based reward scheme.
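
The abstract describes the reward mechanism only at a high level. Below is a minimal, hypothetical sketch of what a reputation-weighted logit reward could look like, assuming clients exchange logits computed on a shared public set; the scoring rule (agreement with the consensus logits) and the reward rule (a client's reputation determines the fraction of global logits it receives) are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Hypothetical sketch of a reputation-based logit reward scheme in the
# spirit of CFD. All names and rules here are illustrative assumptions.

rng = np.random.default_rng(0)

num_clients = 5
num_public_samples = 100   # size of the shared public/reference set
num_classes = 10

# Each client computes logits on the shared public set with its local model
# (random stand-ins here; in practice these come from local training).
client_logits = rng.normal(size=(num_clients, num_public_samples, num_classes))

# Plain FD baseline: every client would receive the same averaged logits.
global_logits = client_logits.mean(axis=0)

def contribution_score(logits, consensus):
    """Assumed proxy: score a client by its agreement with the consensus."""
    return -np.mean((logits - consensus) ** 2)

# Reputation: min-max normalize contribution scores into [0, 1].
scores = np.array([contribution_score(l, global_logits) for l in client_logits])
reputations = (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

def reward_logits(global_logits, own_logits, reputation, rng):
    """A client with reputation r receives global logits for only a
    fraction r of the public samples; elsewhere it keeps its own."""
    n = global_logits.shape[0]
    k = int(round(reputation * n))            # samples unlocked by reputation
    idx = rng.choice(n, size=k, replace=False)
    reward = own_logits.copy()                # default: own predictions
    reward[idx] = global_logits[idx]          # overwrite with global knowledge
    return reward

rewards = [
    reward_logits(global_logits, client_logits[i], reputations[i], rng)
    for i in range(num_clients)
]
for i, r in enumerate(reputations):
    print(f"client {i}: reputation={r:.2f}, "
          f"global logits received for {int(round(r * num_public_samples))} samples")
```

The intent mirrors the abstract's fairness argument: rather than handing identical global logits to every participant, a low-reputation client receives only a small portion of the aggregated knowledge, so a free-rider cannot harvest the full knowledge of high-performing participants.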
ISSN: 0952-1976
DOI: 10.1016/j.engappai.2024.109216