Balanced Federated Semisupervised Learning With Fairness-Aware Pseudo-Labeling


Bibliographic details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-07, Vol. 35 (7), p. 9395-9407
Main authors: Wei, Xiao-Xiang; Huang, Hua
Format: Article
Language: English
Description
Abstract: Federated semisupervised learning (FSSL) aims to train models with both labeled and unlabeled data in federated settings, enabling performance improvement and easier deployment in realistic scenarios. However, the non-independently and identically distributed (non-IID) data in clients leads to imbalanced model training due to unfair learning effects on different classes. As a result, the federated model exhibits inconsistent performance not only on different classes, but also on different clients. This article presents a balanced FSSL method with a fairness-aware pseudo-labeling (FAPL) strategy to tackle the fairness issue. Specifically, this strategy globally balances the total number of unlabeled data samples that can participate in model training. The global numerical restrictions are then decomposed into personalized local restrictions for each client to assist local pseudo-labeling. Consequently, the method derives a fairer federated model for all clients and achieves better performance. Experiments on image classification datasets demonstrate the superiority of the proposed method over state-of-the-art FSSL methods.
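The core mechanism the abstract describes, a global per-class cap on pseudo-labeled samples that is decomposed into per-client caps, can be illustrated with a minimal sketch. This is not the paper's actual FAPL algorithm: the function names (`split_global_quota`, `select_pseudo_labels`), the proportional decomposition rule, and the confidence threshold of 0.9 are all illustrative assumptions.

```python
import math
from collections import Counter

def split_global_quota(global_quota, client_unlabeled_counts):
    """Decompose a global per-class pseudo-label quota into per-client
    quotas, proportional to each client's unlabeled pool size
    (an assumed decomposition rule, not the paper's)."""
    total = sum(client_unlabeled_counts)
    return [math.floor(global_quota * n / total) for n in client_unlabeled_counts]

def select_pseudo_labels(preds, confidences, per_class_quota, threshold=0.9):
    """On one client, keep at most `per_class_quota` confident predictions
    per class, highest-confidence first, so no class dominates the
    pseudo-labeled training set. Returns selected sample indices."""
    order = sorted(range(len(preds)), key=lambda i: -confidences[i])
    taken = Counter()  # pseudo-labels accepted so far, per class
    selected = []
    for i in order:
        c = preds[i]
        if confidences[i] >= threshold and taken[c] < per_class_quota:
            taken[c] += 1
            selected.append(i)
    return sorted(selected)

# Example: a global quota of 100 per class, split across three clients
# holding 50, 30, and 20 unlabeled samples.
local_quotas = split_global_quota(100, [50, 30, 20])  # → [50, 30, 20]

# One client's predicted classes and confidences; quota of 2 per class.
preds = [0, 0, 0, 1, 1]
confs = [0.95, 0.92, 0.91, 0.97, 0.50]
chosen = select_pseudo_labels(preds, confs, per_class_quota=2)  # → [0, 1, 3]
```

Capping class 0 at two samples, even though a third prediction clears the threshold, is what keeps the pseudo-labeled pool balanced across classes in this sketch.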
ISSN: 2162-237X
EISSN: 2162-2388
DOI:10.1109/TNNLS.2022.3233093