Hierarchical LTL-Task MDPs for Multi-Agent Coordination through Auctioning and Learning

Full Description

Bibliographic Details
Published in: The International Journal of Robotics Research, 2019
Main authors: Schillinger, Philipp; Bürger, Mathias; Dimarogonas, Dimos V.
Format: Article
Language: English
Online access: Full text
Description
Summary: Given a temporal behavior specification and a team of agents available for execution in a stochastic environment, it is still an open problem how to efficiently decompose and allocate the specification to the agents, while coordinating their actions accordingly and considering long-term performance under uncertain external events. Our proposed framework works towards this goal by automatically constructing a so-called hierarchical LTL-Task MDP, formally decomposing a given temporal logic goal specification into a set of smaller MDP planning problems. In order to efficiently find a multi-agent policy in this generated LTL-Task MDP, we combine methods from planning under uncertainty and auction-based task allocation with techniques from reinforcement learning. A particular challenge is to consider uncertainty in the environment, which might require significant additional effort to satisfy the given specification. This is addressed here by a formalism that allows the agents to prepare for such possible future reactions, instead of updating the set of tasks only after observing an event.
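To illustrate the auction-based task allocation mentioned in the abstract, the sketch below implements a generic sequential single-item auction in Python: each task (e.g., one obtained by decomposing the LTL specification) goes to the agent with the lowest marginal cost bid. The function names, the toy cost model, and the agent/task identifiers are all illustrative assumptions, not the paper's actual formulation.

```python
def sequential_auction(agents, tasks, cost):
    """Assign each task to the agent bidding the lowest marginal cost.

    agents: list of agent ids
    tasks: list of task ids (auctioned one at a time, in order)
    cost(agent, held_tasks, task): estimated marginal cost for `agent`,
        already holding `held_tasks`, to additionally complete `task`.
    Returns a dict mapping agent id -> list of assigned task ids.
    """
    assignment = {a: [] for a in agents}
    for t in tasks:  # one auction round per task
        # Each agent submits a bid: its marginal cost for this task.
        bids = {a: cost(a, assignment[a], t) for a in agents}
        winner = min(bids, key=bids.get)  # lowest bid wins the task
        assignment[winner].append(t)
    return assignment


# Hypothetical cost model: distance of the agent to the task on a line,
# plus a fixed load penalty for each task the agent already holds.
positions = {"r1": 0.0, "r2": 10.0}
task_pos = {"t_a": 2.0, "t_b": 9.0, "t_c": 1.0}

def toy_cost(agent, held, task):
    return abs(positions[agent] - task_pos[task]) + 3.0 * len(held)

result = sequential_auction(["r1", "r2"], ["t_a", "t_b", "t_c"], toy_cost)
# r1 wins t_a and t_c (closer despite the load penalty); r2 wins t_b.
```

In the paper's setting, the bids would instead come from the expected cost of each agent's policy in its local MDP planning problem; the greedy round-by-round structure stays the same.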
ISSN: 0278-3649 (print); 1741-3176 (online)