Deep Policy Gradient Methods Without Batch Updates, Target Networks, or Replay Buffers
Format: Article
Language: English
Abstract: Modern deep policy gradient methods achieve effective performance on simulated robotic tasks, but they all require large replay buffers or expensive batch updates, or both, making them incompatible with real systems that have resource-limited computers. We show that these methods fail catastrophically when limited to small replay buffers or to incremental learning, where updates use only the most recent sample, with no batch updates or replay buffer. We propose Action Value Gradient (AVG), a novel incremental deep policy gradient method, together with a set of normalization and scaling techniques that address the instability of incremental learning. On robotic simulation benchmarks, we show that AVG is the only incremental method that learns effectively, often achieving final performance comparable to batch policy gradient methods. This advance enabled us to demonstrate, for the first time, effective deep reinforcement learning on real robots using only incremental updates, on both a robotic manipulator and a mobile robot.
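
To make the incremental regime concrete, below is a minimal sketch of a single-sample actor-critic loop in PyTorch: every update uses only the most recent transition, with no replay buffer, no batch updates, and no target network. This is not the paper's AVG algorithm and omits its normalization and scaling techniques; the environment ("Pendulum-v1"), network sizes, and learning rates are illustrative assumptions.

```python
# Sketch: incremental (single-sample) actor-critic update loop.
# No replay buffer, no batch updates, no target network -- the regime the
# abstract describes. NOT the paper's AVG algorithm; hyperparameters and the
# environment are arbitrary illustrative choices.
import gymnasium as gym
import torch
import torch.nn as nn

env = gym.make("Pendulum-v1")
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]

# Small Gaussian policy and state-value critic.
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
log_std = nn.Parameter(torch.zeros(act_dim))
critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))

pi_opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=3e-4)
v_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

obs, _ = env.reset()
for step in range(10_000):
    s = torch.as_tensor(obs, dtype=torch.float32)
    dist = torch.distributions.Normal(policy(s), log_std.exp())
    action = dist.sample()
    obs_next, reward, terminated, truncated, _ = env.step(action.numpy())

    # One-step TD error computed from the current transition only.
    s_next = torch.as_tensor(obs_next, dtype=torch.float32)
    with torch.no_grad():
        v_next = 0.0 if terminated else critic(s_next).item()
    td_target = float(reward) + gamma * v_next
    td_error = td_target - critic(s).squeeze()

    # Critic update: regress V(s) toward the one-step target.
    v_loss = td_error.pow(2)
    v_opt.zero_grad()
    v_loss.backward()
    v_opt.step()

    # Actor update: policy gradient weighted by the detached TD error.
    pi_loss = -dist.log_prob(action).sum() * td_error.detach()
    pi_opt.zero_grad()
    pi_loss.backward()
    pi_opt.step()

    obs = obs_next
    if terminated or truncated:
        obs, _ = env.reset()
```

As the abstract notes, naive loops like this one tend to be unstable; the paper's contribution is a method and a set of normalization and scaling techniques that make learning in this single-sample setting work reliably.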
DOI: 10.48550/arxiv.2411.15370