HeteFL: Network-Aware Federated Learning Optimization in Heterogeneous MEC-Enabled Internet of Things

Bibliographic Details
Published in: IEEE Internet of Things Journal, 2022-08, Vol. 9 (15), p. 14073-14086
Main authors: He, Jing; Guo, Songtao; Qiao, Dewen; Yi, Lin
Format: Article
Language: English
Description
Abstract: Federated learning (FL) is an effective paradigm for training a machine-learning model on data distributed across a large number of users in the Internet of Things (IoT) without sharing their raw data. However, federated optimization of the global model in heterogeneous IoT, accounting for the heterogeneity among users and limited network resources, remains an open challenge. In this article, we propose a novel adaptive federated optimization algorithm, Adp-FedProx, to achieve the best learning performance within the limited computation and communication resources at the edge. In particular, we derive a novel convergence bound on the federated training loss in heterogeneous IoT and analyze how the loss is affected by each user's global update frequency and by the time and energy spent on learning. With the proposed algorithm, every user can dynamically adjust its number of local iterations in each global interval and will not drop out of training due to resource exhaustion, which mitigates the negative effect of user heterogeneity and guarantees convergence of the training model. In addition, we obtain the best achievable learning performance by minimizing the gap between the final loss and the optimal loss under the limited resources. Finally, extensive numerical results demonstrate the algorithm's ability to adapt to system heterogeneity: compared with FedProx, the proposed method speeds up FL by 5%-10% and reduces training energy consumption by about 10%.
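The abstract describes the core mechanism at a high level only: each user picks as many local iterations per global interval as its remaining time/energy budget allows, then runs FedProx-style proximal local updates before server-side averaging. The sketch below is not the authors' code; it is a minimal illustration of that idea, and all names (choose_local_iterations, per_iter_time, per_iter_energy, mu, lr, tau_max) are illustrative assumptions rather than quantities defined in the paper.

```python
# Illustrative sketch of the adaptive-local-iteration idea from the abstract.
# Hypothetical names and budgets; not the Adp-FedProx implementation itself.
import numpy as np

def choose_local_iterations(time_left, energy_left,
                            per_iter_time, per_iter_energy, tau_max):
    """Largest local-iteration count affordable under the user's residual budget."""
    tau_time = int(time_left // per_iter_time)
    tau_energy = int(energy_left // per_iter_energy)
    return max(1, min(tau_max, tau_time, tau_energy))

def local_update_fedprox(w_global, grad_fn, tau_k, lr=0.05, mu=0.1):
    """tau_k proximal SGD steps: local gradient plus mu * (w - w_global)."""
    w = w_global.copy()
    for _ in range(tau_k):
        g = grad_fn(w) + mu * (w - w_global)
        w = w - lr * g
    return w

def global_round(w_global, users, tau_max=20):
    """One global interval: adaptive local work per user, then weighted averaging."""
    updates, weights = [], []
    for u in users:
        tau_k = choose_local_iterations(u["time_left"], u["energy_left"],
                                        u["per_iter_time"], u["per_iter_energy"],
                                        tau_max)
        w_k = local_update_fedprox(w_global, u["grad_fn"], tau_k)
        # Charge the spent resources so the user never exceeds its budget
        # (and therefore never drops out mid-training).
        u["time_left"] -= tau_k * u["per_iter_time"]
        u["energy_left"] -= tau_k * u["per_iter_energy"]
        updates.append(w_k)
        weights.append(u["num_samples"])
    return np.average(np.stack(updates), axis=0, weights=np.asarray(weights, float))
```

Each user dictionary here would carry its gradient oracle (grad_fn), dataset size, and per-iteration time/energy costs; the paper's actual choice of local iterations is driven by its convergence bound rather than the simple floor rule used above.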
ISSN: 2327-4662
DOI: 10.1109/JIOT.2022.3145360