Toward Multi-Agent Reinforcement Learning for Distributed Event-Triggered Control
Main authors: | , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Event-triggered communication and control provide high control performance in networked control systems without overloading the communication network. However, most approaches require precise mathematical models of the system dynamics, which may not always be available. Model-free learning of communication and control policies provides an alternative. Nevertheless, existing methods typically consider single-agent settings. This paper proposes a model-free reinforcement learning algorithm that jointly learns resource-aware communication and control policies for distributed multi-agent systems from data. We evaluate the algorithm in a high-dimensional and nonlinear simulation example and discuss promising avenues for further research. |
---|---|
DOI: | 10.48550/arxiv.2305.08723 |
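
The summary above describes jointly learning resource-aware communication and control policies with model-free reinforcement learning. As a rough illustration of what "resource-aware" typically means in such setups, the following is a minimal sketch, not the paper's algorithm: each agent produces both a control action and a binary communication decision, and the per-step reward combines a control cost with a penalty per transmitted message. The dynamics, the threshold trigger, the penalty weight, and all names are hypothetical placeholders; in a learned setting the policy outputs would come from trained networks rather than the hand-coded rules used here.

```python
# Minimal sketch (not the paper's algorithm): each agent's policy jointly outputs
# a control action and a binary "communicate" decision; the reward trades off
# control performance against a communication penalty. All values are placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3          # hypothetical number of agents
COMM_PENALTY = 0.1    # assumed cost per transmitted message


def policy(agent_state, last_broadcast):
    """Hypothetical joint policy: control action + communication decision.

    A learned policy would produce both outputs from a trained network;
    here a simple proportional controller and an error-threshold trigger
    stand in purely for illustration.
    """
    action = -0.5 * agent_state                                    # placeholder control law
    communicate = np.linalg.norm(agent_state - last_broadcast) > 0.2
    return action, communicate


def step(states, broadcasts):
    """One environment step for all agents with a resource-aware reward."""
    next_states = np.empty_like(states)
    reward = 0.0
    for i, (x, b) in enumerate(zip(states, broadcasts)):
        u, comm = policy(x, b)
        if comm:                                   # transmit -> refresh the shared estimate
            broadcasts[i] = x
        next_states[i] = x + 0.1 * u + 0.01 * rng.standard_normal(x.shape)
        # reward: negative quadratic control cost minus a penalty for each transmission
        reward += -(x @ x + 0.01 * u @ u) - COMM_PENALTY * float(comm)
    return next_states, broadcasts, reward


if __name__ == "__main__":
    states = rng.standard_normal((N_AGENTS, 2))
    broadcasts = states.copy()
    total = 0.0
    for _ in range(50):
        states, broadcasts, r = step(states, broadcasts)
        total += r
    print(f"episode return under the placeholder policies: {total:.2f}")
```

The design point this sketch is meant to convey is the reward structure: because every transmission subtracts `COMM_PENALTY`, any policy optimizing the return is pushed to communicate only when doing so improves control performance by more than the messaging cost.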