Actor-Critic with variable time discretization via sustained actions
Format: Article
Language: English
Abstract: Reinforcement learning (RL) methods work in discrete time. In order to apply RL to inherently continuous problems like robotic control, a specific time discretization needs to be defined. This is a choice between sparse time control, which may be easier to train, and finer time control, which may allow for better ultimate performance. In this work, we propose SusACER, an off-policy RL algorithm that combines the advantages of different time discretization settings. Initially, it operates with sparse time discretization and gradually switches to a fine one. We analyze the effects of the changing time discretization in robotic control environments: Ant, HalfCheetah, Hopper, and Walker2D. In all cases our proposed algorithm outperforms the state of the art.
DOI: 10.48550/arxiv.2308.04299
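
The abstract above does not spell out the mechanism, but "sustained actions" with a gradually refined time discretization is commonly realized as an action-repeat wrapper whose repeat count shrinks over training. The sketch below is a minimal illustration of that idea, assuming a classic Gym-style `reset()`/`step()` interface; the class name `SustainedActionWrapper`, the linear annealing schedule, and all parameter values are illustrative assumptions, not details taken from the SusACER paper.

```python
# Minimal sketch of sustained actions with a shrinking repeat count.
# Assumes a Gym-style environment: reset() -> obs,
# step(action) -> (obs, reward, done, info).
# Names and schedule are illustrative, not taken from the paper.

class SustainedActionWrapper:
    def __init__(self, env, initial_repeat=8, final_repeat=1, anneal_steps=100_000):
        self.env = env
        self.initial_repeat = initial_repeat
        self.final_repeat = final_repeat
        self.anneal_steps = anneal_steps
        self.total_env_steps = 0

    def current_repeat(self):
        # Linearly anneal the number of low-level steps an action is sustained for,
        # from initial_repeat (sparse control) down to final_repeat (fine control).
        frac = min(self.total_env_steps / self.anneal_steps, 1.0)
        repeat = self.initial_repeat + frac * (self.final_repeat - self.initial_repeat)
        return max(self.final_repeat, int(round(repeat)))

    def reset(self):
        return self.env.reset()

    def step(self, action):
        # Hold the same action for the current repeat count, accumulating reward.
        total_reward, done, info = 0.0, False, {}
        for _ in range(self.current_repeat()):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            self.total_env_steps += 1
            if done:
                break
        return obs, total_reward, done, info
```

Under this reading, an off-policy learner would collect transitions through such a wrapper, so early training acts at a sparse effective control frequency while later training controls the robot at the environment's native time step; the paper's actual schedule and integration with the ACER-style learner may differ.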