Multi-granularity Weighted Federated Learning for Heterogeneous Edge Computing
Published in: IEEE Transactions on Services Computing, 2024-11, pp. 1-17
Format: Article
Language: English
Abstract: Federated learning (FL), an advanced variant of distributed machine learning, enables clients to collaboratively train a model without sharing raw data, thereby enhancing privacy and security and reducing communication overhead. In edge computing scenarios, however, clients' data and models are increasingly diverse, heterogeneous, and complex. Fundamental challenges such as non-independent and identically distributed (non-IID) data and multi-granularity data coupled with model heterogeneity have become more pronounced and hinder collaborative training among clients. In this paper, we refine the FL framework and propose Multi-granularity Weighted Federated Learning (MGW-FL), which targets efficient collaborative training among clients with varied data granularities and diverse model scales across distinct data distributions. We introduce a distance-based FL mechanism for homogeneous clients that provides personalized models to mitigate the negative effect of non-IID data on model aggregation. In parallel, we propose an attention-weighted FL mechanism, enhanced by a prior attention mechanism, that facilitates knowledge transfer across clients with heterogeneous data granularities and model scales. Furthermore, we provide theoretical analyses of the convergence of the proposed MGW-FL method for both convex and non-convex models. Experimental results on five benchmark datasets show that, compared with baseline methods, MGW-FL improves accuracy by almost 150% and convergence efficiency by nearly 20% on both IID and non-IID data.
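The abstract names the two aggregation mechanisms but does not give their formulas, so the sketch below is only an illustrative guess at the general ideas, not the paper's actual algorithm. It shows, with NumPy, (a) a distance-based weighting that down-weights clients whose parameters lie far from an anchor model, echoing the personalized aggregation for homogeneous clients, and (b) a similarity-based attention weighting with an optional prior, echoing the prior-attention knowledge transfer across heterogeneous clients. The function names, the softmax temperature, the choice of anchor, and the way the prior is folded in are all assumptions made for illustration.

```python
import numpy as np

def distance_weighted_aggregate(client_params, anchor_params, temperature=1.0):
    """Illustrative sketch: aggregate homogeneous client models, down-weighting
    clients whose parameters are far (in L2 distance) from an anchor model
    (e.g. a client's previous personalized model), so that the personalized
    result is less distorted by non-IID outliers."""
    # L2 distance of each client's parameters from the anchor.
    dists = np.array([np.linalg.norm(p - anchor_params) for p in client_params])
    # Softmax over negative distances: closer clients receive larger weights.
    logits = -dists / temperature
    logits -= logits.max()                      # numerical stability
    weights = np.exp(logits) / np.exp(logits).sum()
    # Weighted average of the client parameter vectors.
    return np.sum([w * p for w, p in zip(weights, client_params)], axis=0)


def attention_weighted_aggregate(student_params, teacher_params_list, prior=None):
    """Illustrative sketch: combine knowledge from several 'teacher' parameter
    vectors (assumed already mapped into the student's parameter space) using
    attention weights from dot-product similarity, optionally biased by a
    prior attention vector."""
    scores = np.array([float(student_params @ t) for t in teacher_params_list])
    if prior is not None:
        scores = scores + np.log(prior + 1e-12)  # fold in the prior attention
    scores -= scores.max()
    attn = np.exp(scores) / np.exp(scores).sum()
    return np.sum([a * t for a, t in zip(attn, teacher_params_list)], axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = [rng.normal(size=8) for _ in range(5)]
    anchor = np.zeros(8)
    print(distance_weighted_aggregate(clients, anchor))
    print(attention_weighted_aggregate(clients[0], clients[1:], prior=np.ones(4) / 4))
```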
ISSN: 1939-1374, 2372-0204
DOI: 10.1109/TSC.2024.3495532