Stochastic Fairness and Language-Theoretic Fairness in Planning on Nondeterministic Domains
Format: Article
Language: English
Abstract: We address two central notions of fairness in the literature of planning on
nondeterministic fully observable domains. The first, which we call stochastic
fairness, is classical, and assumes an environment which operates
probabilistically using possibly unknown probabilities. The second, which is
language-theoretic, assumes that if an action is taken from a given state
infinitely often then all its possible outcomes should appear infinitely often
(we call this state-action fairness). While the two notions coincide for
standard reachability goals, they diverge for temporally extended goals. This
important difference has been overlooked in the planning literature, and, we
argue, it has led to confusion in a number of published algorithms that use
reductions stated for state-action fairness, for which they are in fact
incorrect, although they are correct for stochastic fairness. We remedy this and
provide an optimal sound and complete algorithm for solving state-action fair
planning for LTL/LTLf goals, as well as a correct proof of the lower bound on
the goal complexity (our proof is general enough that it also provides new
proofs for the no-fairness and stochastic-fairness cases). Overall, we show that
stochastic fairness is better behaved than state-action fairness.
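
For concreteness, here is a minimal sketch of the state-action fairness condition described in the abstract, written as an LTL constraint over runs of the domain; the symbols s, a, s' and the transition function δ are our own illustrative notation, not quoted from the paper:

\[
  \bigwedge_{s,\; a,\; s' \in \delta(s,a)}
    \Bigl( \mathsf{G}\,\mathsf{F}\,(s \wedge a) \;\rightarrow\;
           \mathsf{G}\,\mathsf{F}\,\bigl(s \wedge a \wedge \mathsf{X}\, s'\bigr) \Bigr)
\]

Read: if action a is taken in state s infinitely often along a run, then every possible outcome s' of applying a in s also occurs infinitely often along that run.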
DOI: 10.48550/arxiv.1912.11203