Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks


Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Wireless Communications, 2022-10, Vol. 21 (10), p. 8441-8458
Main Authors: Feng, Chenyuan, Yang, Howard H., Hu, Deshun, Zhao, Zhiwei, Quek, Tony Q. S., Min, Geyong
Format: Article
Language: English
Online Access: Order full text
Description
Summary: Implementing federated learning (FL) algorithms in wireless networks has garnered a wide range of attention. However, few works have considered the impact of user mobility on learning performance. To fill this research gap, we develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks, where mobile users may roam across edge access points (APs), leading to incomplete or inconsistent FL training. We provide a convergence analysis of conventional HFL with user mobility. Our analysis proves that the learning performance of conventional HFL deteriorates drastically with highly mobile users, and that this decline is exacerbated by a small number of participants and large divergences among users' local data distributions. To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm that redesigns the access mechanism, the local update rule, and the model aggregation scheme. We also conduct experiments to evaluate the learning performance of conventional HFL, cluster federated learning (CFL) with simple averaging, and our proposed MACFL. The results show that MACFL enhances the learning performance, especially in three cases: (i) users with non-independent and identically distributed (non-IID) data, (ii) users with high mobility, and (iii) a small number of users.
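For intuition, hierarchical FL aggregation can be sketched as a two-level FedAvg: users attached to each edge AP average their models locally, and the cloud then averages the edge models. This is a minimal illustrative sketch of the generic HFL aggregation step, not the paper's MACFL scheme; the user-count weighting at the cloud level is an assumption.

```python
import numpy as np

def hierarchical_aggregate(edge_groups):
    """Two-level model aggregation (generic HFL sketch, not MACFL).

    edge_groups: list of edge APs, each a list of user model vectors
    (1-D np.ndarray of equal length).
    Returns the cloud-level global model vector.
    """
    edge_models, weights = [], []
    for users in edge_groups:
        # Edge-level FedAvg: plain average of the users at this AP.
        edge_models.append(np.mean(users, axis=0))
        # Assumed weighting: each edge model weighted by its user count.
        weights.append(len(users))
    # Cloud-level aggregation: weighted average of edge models.
    return np.average(edge_models, axis=0, weights=np.asarray(weights, float))

# Example: two APs, one with two users and one with a single user.
groups = [
    [np.array([1.0, 1.0]), np.array([3.0, 3.0])],  # edge mean -> [2, 2]
    [np.array([6.0, 6.0])],                        # edge mean -> [6, 6]
]
global_model = hierarchical_aggregate(groups)  # (2*[2,2] + 1*[6,6]) / 3
```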
ISSN: 1536-1276, 1558-2248
DOI: 10.1109/TWC.2022.3166386