An Attention Mechanism for Deep Q-Networks with Applications in Robotic Pushing
Main Author: | , , |
---|---|
Format: | Web Resource |
Language: | eng |
Summary: | Humans effortlessly solve push tasks in everyday life, but unlocking these capabilities remains a research challenge in robotics. Physical models are often inaccurate or unattainable. State-of-the-art data-driven approaches learn to compensate for these inaccuracies or dispense with approximate physical models altogether. Nevertheless, data-driven approaches such as Deep Q-Networks (DQNs) frequently get stuck in local optima in large state-action spaces. We propose an attention mechanism for DQNs to improve their sample efficiency and demonstrate, in simulation experiments with a UR5 robot arm, that such a mechanism helps the DQN learn faster and achieve higher performance in a push task involving objects with unknown dynamics. |
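The summary names the core idea — weighting state features with an attention distribution before computing Q-values — without giving implementation details. The following is a minimal illustrative sketch of that general pattern, not the paper's actual architecture: all function names, feature shapes, and weights here are assumptions chosen for clarity, and the learned parameters are replaced with fixed toy values.

```python
# Illustrative sketch only: shows soft attention pooling over state feature
# vectors followed by a linear Q-head, the generic pattern behind
# "attention mechanism for DQNs". Shapes and weights are invented toy values.
import math

def softmax(xs):
    """Numerically stable softmax over a list of attention logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(features, logits):
    """Weight each feature vector by softmax(logits) and sum them."""
    weights = softmax(logits)
    dim = len(features[0])
    pooled = [0.0] * dim
    for w, f in zip(weights, features):
        for i in range(dim):
            pooled[i] += w * f[i]
    return pooled, weights

def q_values(pooled, w_head):
    """Linear Q-head: one Q-value per action (one weight row per action)."""
    return [sum(w * p for w, p in zip(row, pooled)) for row in w_head]

# Toy example: 3 state feature vectors of dimension 2, 2 push actions.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
logits = [2.0, 0.0, 0.0]           # attention logits (would be learned)
w_head = [[1.0, 0.0], [0.0, 1.0]]  # Q-head weights (would be learned)

pooled, weights = attention_pool(features, logits)
qs = q_values(pooled, w_head)
```

The attention distribution concentrates the Q-head's input on the highest-scoring feature, which is the mechanism by which such a layer can steer exploration toward relevant parts of a large state-action space.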