Using Deep Reinforcement Learning for mmWave Real-Time Scheduling
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | We study the problem of real-time scheduling in a multi-hop millimeter-wave (mmWave) mesh. We develop a model-free deep reinforcement learning algorithm called Adaptive Activator RL (AARL), which determines the subset of mmWave links that should be activated during each time slot and the power level for each link. The most important property of AARL is its ability to make scheduling decisions within the strict time slot constraints of typical 5G mmWave networks. AARL can handle a variety of network topologies, network loads, and interference models, and it can adapt to changing workloads. We demonstrate the operation of AARL on several topologies: a small topology with 10 links, a moderately sized mesh with 48 links, and a large topology with 96 links. For each topology, we compare the throughput obtained by AARL to that of a benchmark algorithm called RPMA (Residual Profit Maximizer Algorithm). The most important advantage of AARL over RPMA is speed: AARL makes the necessary scheduling decisions within every time slot, while RPMA cannot. In addition, the scheduling decisions made by AARL outperform those made by RPMA. (A minimal illustrative sketch of this per-slot decision appears after the record below.) |
---|---|
DOI: | 10.48550/arxiv.2210.01423 |
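
The abstract does not describe AARL's internal architecture, so the following is only a minimal sketch, assuming a PyTorch policy network, of the kind of per-slot decision the abstract describes: for every link, output an activation probability and a distribution over discrete power levels, computed in a single forward pass so the decision fits within a time slot. All names and sizes here (`LinkScheduler`, `state_dim`, `num_power_levels`) are hypothetical, not the paper's design.

```python
import torch
import torch.nn as nn

class LinkScheduler(nn.Module):
    """Hypothetical per-slot scheduling policy (not the paper's AARL
    architecture): for each mmWave link, emit an activation probability
    and logits over a set of discrete power levels."""

    def __init__(self, num_links: int, state_dim: int, num_power_levels: int = 4):
        super().__init__()
        self.num_links = num_links
        self.num_power_levels = num_power_levels
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # One activation logit per link.
        self.activate_head = nn.Linear(128, num_links)
        # One logit per (link, power level) pair.
        self.power_head = nn.Linear(128, num_links * num_power_levels)

    def forward(self, state: torch.Tensor):
        h = self.backbone(state)
        activate_prob = torch.sigmoid(self.activate_head(h))
        power_logits = self.power_head(h).view(
            -1, self.num_links, self.num_power_levels
        )
        return activate_prob, power_logits

# Per-slot inference is a single fixed-cost forward pass, which is how a
# learned policy can meet the strict slot deadline the abstract emphasizes.
policy = LinkScheduler(num_links=48, state_dim=96)  # 48-link mesh from the abstract
state = torch.randn(1, 96)  # e.g., queue backlogs + channel estimates (assumed)
with torch.no_grad():
    act_prob, power_logits = policy(state)
active_links = act_prob > 0.5            # subset of links to activate this slot
power_levels = power_logits.argmax(-1)   # chosen discrete power level per link
```

The single-forward-pass structure is the point of contrast the abstract draws with RPMA: a search-based scheduler may not finish within a slot, whereas a trained network's decision latency is fixed and small.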