Emergent Communication in Multi-Agent Reinforcement Learning for Future Wireless Networks
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: In different wireless network scenarios, multiple network entities need to
cooperate in order to achieve a common task with minimum delay and energy
consumption. Future wireless networks mandate exchanging high dimensional data
in dynamic and uncertain environments, therefore implementing communication
control tasks becomes challenging and highly complex. Multi-agent reinforcement
learning with emergent communication (EC-MARL) is a promising solution to
address high dimensional continuous control problems with partially observable
states in a cooperative fashion where agents build an emergent communication
protocol to solve complex tasks. This paper articulates the importance of
EC-MARL within the context of future 6G wireless networks, which imbues
autonomous decision-making capabilities into network entities to solve complex
tasks such as autonomous driving, robot navigation, flying base station
network planning, and smart city applications. An overview of EC-MARL
algorithms and their design criteria is provided, along with use cases
and research opportunities on this emerging topic.
DOI: 10.48550/arxiv.2309.06021