HFSL: heterogeneity split federated learning based on client computing capabilities
Published in: The Journal of supercomputing 2025, Vol. 81 (1), Article 196
Main authors: , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: With the rapid growth of the internet of things (IoT) and smart devices, edge computing has emerged as a critical technology for processing massive amounts of data and protecting user privacy. Split federated learning, an emerging distributed learning framework, enables model training without data leaving local devices, effectively preventing data leakage and misuse. However, the disparity in the computational capabilities of edge devices forces models to be partitioned according to the least capable client, so a significant portion of the computational load is offloaded to more capable server-side infrastructure, incurring substantial training overhead. To address these challenges, this work proposes a novel split federated learning method targeting heterogeneous endpoints. The method handles heterogeneous training across different clients by adding auxiliary layers, enhances the accuracy of heterogeneous model split training with self-distillation, and leverages the global model from the previous round to mitigate accuracy degradation during federated aggregation. We validated the method on the CIFAR-10 dataset against the existing SL, SFLV1, and SFLV2 methods; our HFSL2 method improves accuracy over them by 3.81%, 13.94%, and 6.19%, respectively. Further validation on the HAM10000, FashionMNIST, and MNIST datasets shows that our algorithm effectively improves aggregation accuracy across clients with heterogeneous computing capabilities.
ISSN: 0920-8542, 1573-0484
DOI: 10.1007/s11227-024-06632-6
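
The abstract names three mechanisms: auxiliary layers on the client side of the split, self-distillation to improve the accuracy of heterogeneously split models, and blending in the previous round's global model at aggregation time. Below is a minimal PyTorch sketch of how these pieces could fit together; it is not the authors' implementation, and the toy backbone, cut points, auxiliary head, and hyperparameters `alpha`, `T`, and `beta` are all illustrative assumptions.

```python
# Hedged sketch of the mechanisms described in the abstract, not the paper's
# reference code. Cut points, the auxiliary head, alpha, T, and beta are
# illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_backbone():
    # Toy CNN standing in for the full global model (a real setup would use
    # something like ResNet on CIFAR-10).
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # indices 0-1
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # indices 2-3
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # indices 4-5
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
    )

class ClientModel(nn.Module):
    """Client-side split: the first `cut` backbone layers plus an auxiliary
    classifier, so more capable clients can hold deeper cuts."""
    def __init__(self, backbone, cut, feat_dim, num_classes=10):
        super().__init__()
        self.front = nn.Sequential(*list(backbone.children())[:cut])
        self.aux_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(feat_dim, num_classes))
    def forward(self, x):
        h = self.front(x)              # "smashed" activations sent to server
        return h, self.aux_head(h)

def client_step(client, server_tail, x, y, alpha=0.5, T=3.0):
    """One step: cross-entropy on the server output and on the auxiliary
    head, plus a self-distillation term pulling the auxiliary head toward
    the server-side predictions (assumed formulation of the paper's idea)."""
    h, aux_logits = client(x)
    logits = server_tail(h)
    kd = F.kl_div(F.log_softmax(aux_logits / T, dim=1),
                  F.softmax(logits.detach() / T, dim=1),
                  reduction="batchmean") * T * T
    return (F.cross_entropy(logits, y)
            + F.cross_entropy(aux_logits, y) + alpha * kd)

def aggregate(client_fronts, prev_global, beta=0.3):
    """FedAvg over the layers all clients share, then blend with the previous
    round's global weights to damp aggregation accuracy drops."""
    shared = set.intersection(*(set(s) for s in client_fronts))
    new = copy.deepcopy(prev_global)
    for k in shared:
        avg = torch.stack([s[k].float() for s in client_fronts]).mean(0)
        new[k] = (1 - beta) * avg + beta * prev_global[k].float()
    return new

# One illustrative round with two heterogeneous clients (cuts 2 and 4).
global_net = make_backbone()
clients = [ClientModel(copy.deepcopy(global_net), cut=2, feat_dim=16),
           ClientModel(copy.deepcopy(global_net), cut=4, feat_dim=32)]
tails = [nn.Sequential(*list(copy.deepcopy(global_net).children())[c:])
         for c in (2, 4)]
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
client_step(clients[0], tails[0], x, y).backward()
new_global = aggregate([c.front.state_dict() for c in clients],
                       global_net.state_dict())
```

The aggregation step averages only the layers every client actually holds (the intersection of their state-dict keys), so shallower clients still contribute to the shared prefix while layers no client holds fall back to the previous global weights; the server-side tail would be trained separately on the server, as in standard split federated learning.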