Using Forwards-Backwards Models to Approximate MDP Homomorphisms
Format: Article
Language: English
Abstract: Reinforcement learning agents must painstakingly learn through trial and error which sets of state-action pairs are value equivalent -- requiring an often prohibitively large amount of environment experience. MDP homomorphisms have been proposed that reduce the MDP of an environment to an abstract MDP, enabling better sample efficiency. Consequently, impressive improvements have been achieved when a suitable homomorphism can be constructed a priori -- usually by exploiting a practitioner's knowledge of environment symmetries. We propose a novel approach to constructing homomorphisms in discrete action spaces, which uses a learnt model of environment dynamics to infer which state-action pairs lead to the same state -- which can reduce the size of the state-action space by a factor as large as the cardinality of the original action space. In MinAtar, we report an almost 4x improvement over a value-based off-policy baseline in the low sample limit, when averaging over all games and optimizers.
DOI: 10.48550/arxiv.2209.06356
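
The abstract's core idea can be sketched as: use a (learnt) forward model to predict where each state-action pair leads, and group together pairs that are predicted to reach the same next state. The sketch below is an illustration only, not the authors' implementation; `forward_model` is a hypothetical stand-in with hard-coded toy dynamics where a trained network would be used in practice.

```python
from collections import defaultdict

def forward_model(state, action):
    # Hypothetical stand-in for a learnt dynamics model: maps a
    # (state, action) pair to a predicted next state. Toy dynamics only.
    table = {
        (0, 0): 1, (0, 1): 1, (0, 2): 2,
        (1, 0): 2, (1, 1): 0, (1, 2): 2,
    }
    return table[(state, action)]

def approximate_homomorphism(states, actions):
    """Group state-action pairs by their predicted next state.

    Each group can be treated as a single abstract action, so the effective
    state-action space shrinks by up to a factor of |actions| whenever many
    actions lead to the same successor state.
    """
    classes = defaultdict(list)
    for s in states:
        for a in actions:
            s_next = forward_model(s, a)
            classes[(s, s_next)].append((s, a))
    return dict(classes)

if __name__ == "__main__":
    classes = approximate_homomorphism(states=[0, 1], actions=[0, 1, 2])
    for (s, s_next), pairs in classes.items():
        print(f"state {s} -> predicted {s_next}: equivalent pairs {pairs}")
```

In the toy dynamics above, actions 0 and 1 taken from state 0 both lead to state 1, so those two pairs collapse into one equivalence class, illustrating how the state-action space can shrink by up to the cardinality of the action space.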