Reward-Adaptive Reinforcement Learning: Dynamic Policy Gradient Optimization for Bipedal Locomotion
Format: Article
Language: English
Abstract: Controlling a non-statically stable bipedal robot is challenging because of the complex dynamics and the multi-criterion optimization involved. Recent works have demonstrated the effectiveness of deep reinforcement learning (DRL) for both simulated and physical robots. In these methods, the rewards from different criteria are typically summed to learn a single value function. However, this can discard dependency information between the hybrid rewards and lead to a sub-optimal policy. In this work, we propose a novel reward-adaptive reinforcement learning method for bipedal locomotion, allowing the control policy to be optimized by multiple criteria simultaneously through a dynamic mechanism. The proposed method applies a multi-head critic to learn a separate value function for each reward component, which yields a hybrid policy gradient. We further propose dynamic weights, allowing each component to optimize the policy with a different priority. This hybrid and dynamic policy gradient (HDPG) design lets the agent learn more efficiently. We show that the proposed method outperforms summed-reward approaches and transfers to physical robots. Sim-to-real and MuJoCo results further demonstrate the effectiveness and generalization of HDPG.
DOI: 10.48550/arxiv.2107.01908
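The abstract only sketches the HDPG mechanism, so a minimal, hypothetical PyTorch sketch may help illustrate it: a multi-head critic that keeps one value function per reward component, and an actor loss formed from dynamically weighted per-component policy-gradient terms. The class names, network sizes, TD-style advantage, and the advantage-magnitude weighting rule below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultiHeadCritic(nn.Module):
    """Shared trunk with one value head per reward component."""

    def __init__(self, obs_dim: int, n_rewards: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_rewards)])

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        # (batch, n_rewards): one value estimate per criterion.
        return torch.cat([head(h) for head in self.heads], dim=-1)


class GaussianActor(nn.Module):
    """Simple Gaussian policy over continuous joint commands."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.mean = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs: torch.Tensor) -> torch.distributions.Normal:
        return torch.distributions.Normal(self.mean(obs), self.log_std.exp())


def hybrid_policy_loss(actor, critic, obs, act, rewards, next_obs, gamma=0.99):
    """Actor loss built from per-component advantages and dynamic weights.

    `rewards` has shape (batch, n_rewards), one column per criterion
    (e.g. forward velocity, balance, energy). The weighting rule below
    (normalised mean advantage magnitude) is an assumed placeholder for
    the paper's dynamic-weight mechanism.
    """
    with torch.no_grad():
        # One TD advantage per reward component, against the matching head.
        adv = rewards + gamma * critic(next_obs) - critic(obs)
        weights = adv.abs().mean(dim=0)
        weights = weights / weights.sum()          # dynamic priorities
    logp = actor.dist(obs).log_prob(act).sum(dim=-1, keepdim=True)
    # Hybrid policy gradient: a weighted sum of per-component terms
    # instead of a single gradient from a summed-up reward.
    return -(logp * (weights * adv)).sum(dim=-1).mean()
```

Keeping a separate value head per criterion preserves the per-component credit assignment that a summed scalar reward would collapse; whatever weighting rule is actually used, the actor update remains a weighted combination of the individual policy-gradient terms.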