Obstacle-Avoidable Robotic Motion Planning Framework Based on Deep Reinforcement Learning

Bibliographic Details
Published in: IEEE/ASME Transactions on Mechatronics, Dec. 2024, Vol. 29, No. 6, pp. 4377-4388
Main authors: Liu, Huashan; Ying, Fengkang; Jiang, Rongxin; Shan, Yinghao; Shen, Bo
Format: Article
Language: English
Abstract: Although robotic trajectory generation has been extensively studied, motion planning in environments with obstacles still faces open issues and remains to be fully explored. In this article, a universal motion planning framework based on deep reinforcement learning (DRL) is proposed to achieve autonomous obstacle avoidance in robotic tasks. First, a prophet-guided actor-critic structure based on an expert strategy is designed, which enables prompt replanning when the task scenario changes. Second, an expansive dual-memory sampling mechanism is proposed to efficiently augment expert data from only a few demonstrations; it also improves the training efficiency of DRL algorithms through an increasingly unbiased sampling rule. Third, a composite obstacle-avoidable reward system is designed to achieve collision-free motion for both a robot's end effector and its body/links; it builds a dense reward map and strikes a balance between obstacle avoidance and action exploration. Finally, experimental results validate the performance of the proposed framework in three different scenes.
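The abstract's first contribution combines an actor-critic learner with guidance from expert demonstrations. The paper's actual architecture is not reproduced here; the following is only a minimal, generic sketch of that broad idea: a policy trained on a loss that mixes a surrogate RL objective (a stand-in for a learned critic) with an expert-imitation term. All names, dimensions, and the `beta` weighting are illustrative assumptions, not the authors' design.

```python
import random

random.seed(0)

# Hypothetical setup: a scalar linear policy action = theta * state,
# and a handful of expert demonstrations from an assumed expert gain.
states = [random.uniform(-1.0, 1.0) for _ in range(32)]
THETA_EXPERT = 1.5                                # hypothetical expert policy gain
expert_actions = [THETA_EXPERT * s for s in states]

def loss(theta, beta=0.8):
    """Mix a surrogate RL term with an expert-imitation (behavior-cloning) term.

    rl_term is a toy action-magnitude penalty standing in for a critic;
    bc_term pulls the policy toward the demonstrated expert actions.
    """
    actions = [theta * s for s in states]
    rl_term = 0.01 * sum(a * a for a in actions) / len(actions)
    bc_term = sum((a - e) ** 2 for a, e in zip(actions, expert_actions)) / len(actions)
    return (1.0 - beta) * rl_term + beta * bc_term

def grad(theta, eps=1e-6):
    # Central finite-difference gradient of the mixed loss.
    return (loss(theta + eps) - loss(theta - eps)) / (2.0 * eps)

# Plain gradient descent on the mixed loss: the imitation term dominates
# (beta = 0.8), so the learned gain converges near the expert's.
theta = 0.0
for _ in range(300):
    theta -= 0.2 * grad(theta)
```

With the imitation term weighted heavily, the trained gain lands close to the expert's 1.5; lowering `beta` would shift the optimum toward whatever the RL term prefers. This mirrors, only at a cartoon level, why expert guidance can anchor exploration in a DRL motion planner.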
ISSN: 1083-4435
eISSN: 1941-014X
DOI: 10.1109/TMECH.2024.3377002