Toward Efficient Federated Learning over Wireless Networks: Novel Frontiers in Resource Optimization

Bibliographic Details
Author: Mahmoudi, Afsaneh
Format: Dissertation
Language: English
Description
Summary: With the rise of the Internet of Things (IoT) and 5G networks, edge computing addresses critical limitations in cloud computing's quality of service. Machine learning (ML) has become essential for processing IoT-generated data at the edge, primarily through distributed optimization algorithms that support predictive tasks. However, state-of-the-art ML models demand substantial computational and communication resources, often exceeding the capabilities of wireless devices. Moreover, training these models typically requires centralized access to datasets, but transmitting such data to the cloud introduces significant communication overhead, posing a critical challenge to resource-constrained systems. Federated Learning (FL) is a promising iterative approach that reduces communication costs through local computation on devices, sharing only model parameters with a central server. Accordingly, every communication iteration of FL incurs computation, latency, bandwidth, and energy costs. Although FL enables distributed learning across multiple devices without exchanging raw data, its success is often hindered by wireless communication overhead, including traffic congestion, and by device resource constraints. To address these challenges, this thesis presents cost-effective methods for making FL training more efficient in resource-constrained wireless environments.

Initially, we investigate challenges in distributed training over wireless networks, addressing the background traffic and latency that impede communication iterations. We introduce the cost-aware causal FL algorithm (FedCau), which balances training performance against communication and computation costs through a novel iteration-termination method that requires no future information. A multi-objective optimization problem is formulated, integrating the FL loss and iteration costs, with communication managed via slotted-ALOHA, CSMA/CA, and OFDMA protocols. The framework is extended to cover both convex and non-convex loss functions, and results are compared with established communication-efficient methods, including the Lazily Aggregated Quantized Gradient (LAQ). Additionally, we develop A-LAQ (Adaptive LAQ), which conserves energy while maintaining high test accuracy by dynamically adjusting the bit allocation for local model updates across iterations.

Next, we leverage cell-free massive multiple-input multiple-output (CFm-MIMO) networks to address the high latency in large
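
The multi-objective trade-off between training performance and iteration costs can be made concrete. Below is a minimal sketch of one common scalarization, assuming $c_i$ denotes the combined communication and computation cost of iteration $i$, $f(\mathbf{w}_k)$ the FL training loss after $k$ iterations, and $\beta \in [0, 1]$ a trade-off weight; the thesis's exact formulation and constraints may differ:

```latex
\min_{k,\, \mathbf{w}_k} \;\; \beta \sum_{i=1}^{k} c_i \;+\; (1-\beta)\, f(\mathbf{w}_k)
```

Under such a scalarization, training should stop at the first iteration whose marginal loss reduction no longer outweighs its marginal cost; because FedCau is causal, this decision must be made from observed iterates alone, without knowledge of future losses.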
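
To illustrate such a causal termination rule, the following Python sketch runs federated gradient descent on synthetic least-squares data and stops once one round's loss improvement falls below its cost. All names (`clients`, `global_loss`, the per-round cost `c`) are hypothetical stand-ins, and the simple threshold test is an assumption for illustration, not the thesis's exact FedCau criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic federated setup: 4 clients, each with local linear-regression data.
d, n_local, n_clients = 5, 50, 4
w_true = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(n_local, d))
    clients.append((X, X @ w_true + 0.1 * rng.normal(size=n_local)))

def global_loss(w):
    # Average of the local mean-squared errors, as the server would evaluate it.
    return np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])

w, lr = np.zeros(d), 0.05
c = 1e-3  # assumed per-round communication/computation cost (a scalar proxy)
prev_loss = global_loss(w)

for k in range(1, 201):
    # Each client computes a gradient on its own data; the server averages them.
    g = np.mean([2 * X.T @ (X @ w - y) / len(y) for X, y in clients], axis=0)
    w -= lr * g
    loss = global_loss(w)
    # Causal stop: terminate as soon as a round's loss improvement no longer
    # covers its cost -- only past and present information is used.
    if prev_loss - loss < c:
        print(f"stopped at round {k}, loss {loss:.4f}")
        break
    prev_loss = loss
```

Raising `c` models a more expensive channel (for example, a congested slotted-ALOHA uplink) and makes training terminate earlier, which is the qualitative trade-off a cost-aware stopping rule exploits.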
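
The adaptive bit allocation behind A-LAQ can be sketched in the same spirit: spend more bits per coordinate while updates are large and fewer as they shrink, reducing the energy spent on uploads. The uniform quantizer and the norm-based bit schedule below are illustrative assumptions, not the dissertation's derived policy:

```python
import numpy as np

def quantize_uniform(v, bits):
    """Uniformly quantize v to `bits` bits per coordinate and dequantize."""
    levels = 2 ** bits - 1
    vmax = float(np.max(np.abs(v)))
    if vmax == 0.0:
        return v.copy()
    q = np.round((v / vmax + 1.0) / 2.0 * levels)  # indices in {0, ..., levels}
    return (q / levels * 2.0 - 1.0) * vmax

rng = np.random.default_rng(1)
d = 10
w, w_star = np.zeros(d), rng.normal(size=d)
grad = lambda w: w - w_star        # gradient of 0.5 * ||w - w_star||^2
ref = np.linalg.norm(grad(w))      # first update's norm, used to scale the schedule
total_bits, lr = 0, 0.3

for k in range(1, 31):
    g = grad(w)
    # Assumed adaptive rule: the bit budget shrinks with the update norm.
    bits = int(np.clip(np.round(2 + 6 * np.linalg.norm(g) / ref), 2, 8))
    w -= lr * quantize_uniform(g, bits)
    total_bits += bits * d          # bits uploaded this round (one device)
print(f"final error {np.linalg.norm(w - w_star):.3f}, total bits sent {total_bits}")
```

Compared with a fixed 8-bit quantizer, a schedule like this typically sends fewer total bits for a similar final error, which is the energy-accuracy trade-off A-LAQ targets.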