LORM: Learning to Optimize for Resource Management in Wireless Networks With Few Training Samples

Bibliographic Details
Published in: IEEE Transactions on Wireless Communications, 2020-01, Vol. 19(1), pp. 665-679
Authors: Shen, Yifei; Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.
Format: Article
Language: English

Abstract: Effective resource management plays a pivotal role in wireless networks, but it typically results in challenging mixed-integer nonlinear programming (MINLP) problems. Machine learning-based methods have recently emerged as a disruptive way to obtain near-optimal performance for MINLPs with affordable computational complexity. There have been attempts to apply such methods to resource management in wireless networks, but they require huge numbers of training samples and lack the capability to handle constrained problems. Furthermore, they suffer from severe performance deterioration when the network parameters change, which commonly happens and is referred to as the task mismatch problem. In this paper, to reduce the sample complexity and address the feasibility issue, we propose a framework of Learning to Optimize for Resource Management (LORM). In contrast to the end-to-end learning approach adopted in previous studies, LORM learns the optimal pruning policy in the branch-and-bound algorithm for MINLPs via a sample-efficient method, namely, imitation learning. To further address the task mismatch problem, we develop a transfer learning method via self-imitation in LORM, named LORM-TL, which can quickly adapt a pre-trained machine learning model to a new task with only a few additional unlabeled training samples. Numerical simulations demonstrate that LORM outperforms specialized state-of-the-art algorithms and achieves near-optimal performance, while providing significant speedup compared with the branch-and-bound algorithm. Moreover, LORM-TL, relying on only a few unlabeled samples, achieves performance comparable to a model trained from scratch with sufficient labeled samples.
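
To make the approach concrete, the sketch below casts the branch-and-bound pruning decision as binary classification trained by imitating an exact solver, which is the core mechanism the abstract describes. The toy 0/1 knapsack standing in for the wireless MINLP, the hand-picked node features, and the scikit-learn logistic-regression policy are all illustrative assumptions rather than the paper's actual formulation.

```python
# Minimal sketch: learn the branch-and-bound pruning decision by imitating
# an exact solver. The toy knapsack problem, the node features, and the
# logistic-regression policy are illustrative assumptions, not the paper's
# actual problem, model, or feature design.
import itertools

import numpy as np
from sklearn.linear_model import LogisticRegression


def brute_force_opt(values, weights, cap):
    """Exhaustive 'expert': return the optimal 0/1 assignment."""
    best_val, best_x = -1.0, None
    for bits in itertools.product((0, 1), repeat=len(values)):
        w = sum(b * wt for b, wt in zip(bits, weights))
        v = sum(b * vl for b, vl in zip(bits, values))
        if w <= cap and v > best_val:
            best_val, best_x = v, bits
    return best_val, best_x


def node_features(prefix, values, weights, cap):
    """Hypothetical node features: depth, load, value, optimistic bound."""
    d = len(prefix)
    used = sum(b * w for b, w in zip(prefix, weights))
    val = sum(b * v for b, v in zip(prefix, values))
    return [d, used / cap, val, val + sum(values[d:])]


def demonstrations(instances):
    """Label each feasible node 1 iff it lies on the expert's optimal path."""
    X, y = [], []
    for values, weights, cap in instances:
        _, opt_x = brute_force_opt(values, weights, cap)
        for d in range(1, len(values) + 1):
            for bits in itertools.product((0, 1), repeat=d):
                if sum(b * w for b, w in zip(bits, weights)) > cap:
                    continue  # infeasible nodes never reach the policy
                X.append(node_features(bits, values, weights, cap))
                y.append(int(bits == opt_x[:d]))
    return np.array(X), np.array(y)


def learned_bnb(values, weights, cap, policy):
    """Depth-first search where the learned policy replaces bound pruning."""
    n = len(values)
    best_val, best_x = 0.0, (0,) * n  # the all-zero solution is feasible
    stack = [()]
    while stack:
        prefix = stack.pop()
        if len(prefix) == n:
            v = sum(b * vl for b, vl in zip(prefix, values))
            if v > best_val:
                best_val, best_x = v, prefix
            continue
        for b in (0, 1):
            child = prefix + (b,)
            if sum(c * w for c, w in zip(child, weights)) > cap:
                continue  # feasibility stays a hard rule, never learned
            if policy.predict([node_features(child, values, weights, cap)])[0]:
                stack.append(child)  # classifier says: worth expanding
    return best_val, best_x


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def make_instance(n=8, cap=20.0):
        return (rng.uniform(1, 10, n).tolist(),
                rng.uniform(1, 10, n).tolist(), cap)

    train = [make_instance() for _ in range(5)]
    X, y = demonstrations(train)
    policy = LogisticRegression(max_iter=1000,
                                class_weight="balanced").fit(X, y)

    test = make_instance()
    print("expert optimum :", brute_force_opt(*test)[0])
    print("learned search :", learned_bnb(*test, policy)[0])
```

What this toy shares with LORM is the training signal: labels come from an exact solver's search tree on a handful of small instances, which is far cheaper to collect than the end-to-end input-to-solution mappings used by earlier learning approaches. The paper's LORM-TL variant additionally adapts a trained policy to a changed network via self-imitation on a few unlabeled samples; that would add a fine-tuning pass over the policy, omitted in this sketch.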
ISSN: 1536-1276 (print); 1558-2248 (electronic)
DOI: 10.1109/TWC.2019.2947591