Path Planning Problems with Side Observations: When Colonels Play Hide-and-Seek
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Resource allocation games such as the famous Colonel Blotto (CB) and
Hide-and-Seek (HS) games are often used to model a large variety of practical
problems, but only in their one-shot versions. Indeed, due to their extremely
large strategy space, it remains an open question how one can efficiently learn
in these games. In this work, we show that the online CB and HS games can be
cast as path planning problems with side-observations (SOPPP): at each stage, a
learner chooses a path on a directed acyclic graph and suffers the sum of
losses that are adversarially assigned to the corresponding edges; and she then
receives semi-bandit feedback with side-observations (i.e., she observes the
losses on the chosen edges plus some others). We propose a novel algorithm,
EXP3-OE, the first of its kind with a guaranteed efficient running time for
SOPPP that does not require any auxiliary oracle. We provide an expected-regret
bound for EXP3-OE in SOPPP that matches the order of the best benchmark in the
literature.
Moreover, we introduce additional assumptions on the observability model under
which we can further improve the regret bounds of EXP3-OE. We illustrate the
benefit of using EXP3-OE in SOPPP by applying it to the online CB and HS games.
DOI: 10.48550/arxiv.1905.11151
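
The SOPPP setup described in the abstract (pick a path on a DAG, suffer the summed edge losses, then observe the losses of the chosen edges plus some side-observations) can be pictured with a small sketch. The snippet below is a minimal, hypothetical illustration in Python of an exponential-weights learner over the paths of a toy graph; it is not the authors' EXP3-OE algorithm, and the graph, the side-observation map, the learning rate, and the random loss model are assumptions made only for the demo.

```python
# Illustrative sketch (NOT the paper's EXP3-OE): exponential weights over the
# paths of a toy DAG, with semi-bandit feedback plus side-observations.
import math
import random

# Toy DAG as adjacency lists; edges are (u, v) pairs. (Assumed example graph.)
DAG = {"s": ["a", "b"], "a": ["t"], "b": ["t"], "t": []}


def enumerate_paths(dag, source, sink):
    """Return all source-to-sink paths as tuples of edges (exhaustive; fine for toy graphs)."""
    if source == sink:
        return [()]
    paths = []
    for nxt in dag[source]:
        for rest in enumerate_paths(dag, nxt, sink):
            paths.append(((source, nxt),) + rest)
    return paths


PATHS = enumerate_paths(DAG, "s", "t")
EDGES = sorted({e for p in PATHS for e in p})

# Hypothetical observation map: playing an edge also reveals the losses of
# these extra edges (this stands in for the paper's side-observation model).
SIDE_OBS = {("s", "a"): [("s", "b")], ("s", "b"): [("s", "a")]}

eta = 0.1                       # learning rate (assumed value)
weights = {p: 1.0 for p in PATHS}

for t in range(300):
    # Sample a path with probability proportional to its exponential weight.
    total = sum(weights.values())
    probs = {p: w / total for p, w in weights.items()}
    path = random.choices(PATHS, weights=[probs[p] for p in PATHS])[0]

    # Adversary's per-edge losses for this round (random here, purely for demo).
    loss = {e: random.random() for e in EDGES}

    # Semi-bandit feedback with side-observations: losses of the chosen edges
    # plus the edges revealed by the observation map.
    observed = set(path)
    for e in path:
        observed.update(SIDE_OBS.get(e, []))

    # Importance-weighted loss estimates on observed edges, then a
    # multiplicative-weights update for every path containing that edge.
    for e in observed:
        obs_prob = sum(probs[p] for p in PATHS
                       if e in p or any(e in SIDE_OBS.get(f, []) for f in p))
        est = loss[e] / max(obs_prob, 1e-12)
        for p in PATHS:
            if e in p:
                weights[p] *= math.exp(-eta * est)

# Report the path the learner currently favors.
total = sum(weights.values())
probs = {p: w / total for p, w in weights.items()}
print("path with highest probability after learning:", max(probs, key=probs.get))
```

Note that this sketch enumerates all paths explicitly, which is exactly what becomes infeasible for the exponentially large strategy spaces of the CB and HS games; the paper's contribution is an algorithm that avoids this while keeping the regret guarantee.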