Snowball Effect in Federated Learning: An Approach of Exponentially Expanding Structures for Optimizing the Training Efficiency

Bibliographic Details
Published in: IEEE Transactions on Cognitive Communications and Networking, 2024-10, p. 1-1
Authors: Cheng, Guoliang; Li, Peichun; Tan, Beihai; Yu, Rong; Wu, Yuan; Pan, Miao
Format: Article
Language: English
Description
Abstract: Efficient federated learning (FL) in mobile edge networks faces challenges due to energy-consuming on-device training and wireless transmission. Optimizing the neural network structure is an effective approach to achieving energy savings. In this paper, we present Snowball FL, a training framework with an expanding neural network structure that starts from a small-sized submodel and gradually progresses to the full-sized model. To achieve this, we first design submodel and embedding extraction schemes for fine-grained expansion of the model structure. We then investigate the joint minimization of the global training loss and the system-wise energy consumption. After that, we decompose the optimization problem into a long-term model structure expansion subproblem and a single-round local resource allocation subproblem. Specifically, the former subproblem is transformed into a variational calculus problem by leveraging a theoretical analysis of the convergence bound. The Euler-Lagrange method is used to derive the solution, in which the optimal evolution strategy for the model structure increases exponentially with the global round (i.e., the Snowball effect). Meanwhile, the latter subproblem is solved by convex optimization to obtain the optimal computing frequency and transmission power. Experiments indicate that the proposed framework saves about 50% of the energy consumption while achieving on-par accuracy compared with state-of-the-art algorithms.
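The exponentially expanding structure described in the abstract can be pictured with a minimal sketch. This is not the authors' implementation: the function name `snowball_schedule`, the initial submodel fraction `r0`, and the growth rate `rho` are illustrative assumptions; the only point carried over from the abstract is that the submodel grows exponentially with the global round until it reaches the full model.

```python
# Illustrative sketch (not the paper's code): an exponentially growing
# submodel-size schedule across FL rounds, capped at the full model.
# `r0` (initial fraction of the full model) and `rho` (per-round growth
# factor) are assumed values chosen only for demonstration.

def snowball_schedule(num_rounds: int, r0: float = 0.1, rho: float = 1.05) -> list[float]:
    """Return the fraction of the full model trained in each global round.

    The fraction starts small (r0) and increases exponentially with the
    round index t, i.e. r_t = min(1.0, r0 * rho**t), mimicking the
    "Snowball" expansion described in the abstract.
    """
    return [min(1.0, r0 * rho**t) for t in range(num_rounds)]


if __name__ == "__main__":
    schedule = snowball_schedule(num_rounds=100)
    # Early rounds train a small submodel; later rounds approach the full model.
    print(schedule[0], schedule[25], schedule[50], schedule[99])
```

Under this assumed schedule, early rounds spend little energy on small submodels, and the full model is only trained once the round index is large enough, which is the intuition behind the reported energy savings.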
ISSN: 2332-7731
DOI: 10.1109/TCCN.2024.3480045