Multicast Scheduling over Multiple Channels: A Distribution-Embedding Deep Reinforcement Learning Method
Format: Article
Language: English
Abstract: Multicasting is an efficient technique for simultaneously transmitting common messages from the base station (BS) to multiple mobile users (MUs). Multicast scheduling over multiple channels, which aims to jointly minimize the energy consumption of the BS and the latency of serving asynchronous requests from the MUs, is formulated as an infinite-horizon Markov decision process (MDP) problem with a large discrete action space, multiple time-varying constraints, and multiple time-invariant constraints. To address these challenges, this paper proposes a novel distribution-embedding multi-agent proximal policy optimization (DE-MAPPO) algorithm, which consists of a modified MAPPO module and a distribution-embedding module: the former handles the large discrete action space and the time-varying constraints by modifying the structure of the actor networks and the training kernel of conventional MAPPO, while the latter iteratively adjusts the action distribution to satisfy the time-invariant constraints. Moreover, a performance upper bound on the considered MDP is derived by solving a two-step optimization problem. Finally, numerical results demonstrate that the proposed algorithm outperforms existing methods and achieves performance comparable to the derived benchmark.
DOI: 10.48550/arxiv.2205.09420
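The record does not include the paper's implementation, so the following is only a rough illustration of how an iterative distribution-embedding step could enforce a time-invariant constraint. The sketch assumes the constraint takes the form of a fixed budget on the expected per-action cost and tilts a categorical action distribution until that budget is met; the function name embed_distribution, the dual-ascent tilting scheme, and all parameters are assumptions, not the paper's actual DE-MAPPO module.

```python
import numpy as np

def embed_distribution(probs, cost, budget, step=0.5, num_iters=100, tol=1e-6):
    """Hypothetical sketch: iteratively tilt a categorical action
    distribution until its expected per-action cost satisfies a
    time-invariant budget constraint, sum_a pi(a) * cost(a) <= budget.
    Uses exponential tilting driven by a dual-ascent multiplier; this
    is an illustration, not the paper's DE-MAPPO module."""
    lam = 0.0                                  # dual variable for the budget constraint
    tilted = probs / probs.sum()               # start from the actor's distribution
    for _ in range(num_iters):
        violation = tilted @ cost - budget     # constraint violation under current tilt
        if violation <= tol:
            break                              # expected cost is within budget
        lam += step * violation                # raise the penalty on costly actions
        tilted = probs * np.exp(-lam * cost)   # down-weight high-cost actions
        tilted /= tilted.sum()                 # renormalize to a valid distribution
    # If no feasible tilt exists (min(cost) > budget), the most tilted
    # distribution found is returned.
    return tilted
```

For example, with probs = np.array([0.4, 0.3, 0.2, 0.1]), cost = np.array([5.0, 3.0, 2.0, 1.0]), and budget = 2.5, the tilting shifts probability mass toward the two cheapest actions, and an actor could then sample from the returned distribution. Again, this is a sketch under the stated assumptions, not the algorithm described in the paper.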