Cost-Efficient and Skew-Aware Data Scheduling for Incremental Learning in 5G Networks


Bibliographic Details
Published in: IEEE Journal on Selected Areas in Communications, 2022-02, Vol. 40 (2), pp. 578-595
Authors: Pu, Lingjun; Yuan, Xinjing; Xu, Xiaohang; Chen, Xu; Zhou, Pan; Xu, Jingdong
Format: Article
Language: English
Abstract: To facilitate the emerging applications in 5G networks, mobile network operators will provide many network functions for control and prediction. Recently, they have recognized the power of machine learning (ML) and started to explore its potential for these network functions. Nevertheless, current ML models for network functions are often derived in an offline manner, which is inefficient: it incurs excessive overhead to transmit huge volumes of data to remote ML training clouds, and it fails to provide incremental learning for continuous model updating. As an alternative, we propose Cocktail, an incremental learning framework within a reference 5G network architecture. To achieve cost efficiency while increasing trained model accuracy, an efficient online data scheduling policy is essential. To this end, we formulate an online data scheduling problem that optimizes the framework cost from a long-term perspective while alleviating the data skew caused by the capacity heterogeneity of training workers. We exploit stochastic gradient descent to devise an online, asymptotically optimal algorithm, including two optimal policies based on novel graph constructions for skew-aware data collection and data training. Small-scale testbed experiments and large-scale simulations validate the superior performance of the proposed framework.
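To give a flavor of what "skew-aware online data scheduling via gradient descent" can mean, the following is a minimal illustrative sketch, not the paper's Cocktail algorithm: at each time slot, per-worker data shares are updated by a projected gradient step on a cost term plus a quadratic skew penalty that pulls shares toward capacity-proportional values. All names, the cost model, and the penalty form are assumptions made for illustration.

```python
# Illustrative sketch ONLY -- not the Cocktail algorithm from the paper.
# Assumed model: cost(x) = sum_i c_i * x_i, and a skew penalty
# (w/2) * sum_i (x_i - t_i)^2 pulling shares x toward
# capacity-proportional targets t; shares live on the probability simplex.

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def schedule_step(x, cost_per_unit, capacity, lr=0.1, skew_weight=1.0):
    """One online projected-gradient update of per-worker data shares."""
    total_cap = sum(capacity)
    target = [c / total_cap for c in capacity]  # capacity-proportional shares
    # gradient of cost + skew penalty w.r.t. each share x_i
    grad = [cost_per_unit[i] + skew_weight * (x[i] - target[i])
            for i in range(len(x))]
    y = [x[i] - lr * grad[i] for i in range(len(x))]
    return project_to_simplex(y)

# Usage: three heterogeneous workers; expensive, low-capacity worker 2
# gradually loses its share over 50 scheduling slots.
x = [1 / 3, 1 / 3, 1 / 3]
for _ in range(50):
    x = schedule_step(x, cost_per_unit=[0.2, 0.5, 0.1], capacity=[4, 1, 5])
```

The per-slot update uses only the current slot's state, which is the hallmark of the online setting the abstract describes; the actual paper additionally handles long-term constraints and graph-based matching policies for collection and training.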
ISSN: 0733-8716; 1558-0008
DOI: 10.1109/JSAC.2021.3118430