A Q-learning Control Method for a Soft Robotic Arm Utilizing Training Data from a Rough Simulator
Format: Article
Language: English
Online Access: Order full text
Abstract: Controlling a soft robot is challenging; reinforcement learning methods have been applied to this problem with promising results. However, because of their poor sample efficiency, reinforcement learning methods require large amounts of training data, which limits their applications. In this paper, we propose a Q-learning controller for a physical soft robot, in which models pre-trained on data from a rough simulator are used to improve the controller's performance. We implement the method on our soft robot, the Honeycomb Pneumatic Network (HPN) arm. The experiments show that using pre-trained models not only reduces the amount of real-world training data required, but also greatly improves the controller's accuracy and convergence rate.
DOI: 10.48550/arxiv.2109.05795
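The abstract only sketches the approach at a high level. As a rough illustration of the pre-train-then-fine-tune idea it describes, here is a minimal Q-learning sketch in Python/PyTorch. Everything concrete in it (the discrete action set, network sizes, and the `StubEnv` placeholder standing in for both the rough simulator and the physical HPN arm) is an assumption for illustration, not the paper's actual implementation.

```python
# Minimal sketch of the two-stage scheme from the abstract: pre-train a
# Q-network on cheap simulator transitions, then fine-tune the same
# network on scarce real-world data. STATE_DIM, N_ACTIONS, and StubEnv
# are illustrative assumptions; the paper's state/action encoding for
# the HPN arm is not given in this record.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 16  # assumed discretization, not from the paper


class QNet(nn.Module):
    """Small MLP mapping a state to one Q-value per discrete action."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)


class StubEnv:
    """Placeholder with a reset()/step() interface; stands in for either
    the rough simulator or the physical arm in this sketch."""

    def reset(self):
        self.t = 0
        self.state = torch.randn(STATE_DIM)
        return self.state

    def step(self, action):
        self.t += 1
        self.state = self.state + 0.1 * torch.randn(STATE_DIM)
        reward = -self.state.norm().item()  # e.g. a distance-to-target cost
        return self.state, reward, self.t >= 50


def train(qnet, env, episodes, epsilon=0.1, gamma=0.99, batch_size=64):
    """Epsilon-greedy Q-learning with a replay buffer."""
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    buffer = deque(maxlen=50_000)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection over the discrete action set.
            if random.random() < epsilon:
                a = random.randrange(N_ACTIONS)
            else:
                with torch.no_grad():
                    a = qnet(s).argmax().item()
            s2, r, done = env.step(a)
            buffer.append((s, a, r, s2))
            s = s2
            if len(buffer) >= batch_size:
                # One TD update on a random minibatch of transitions.
                ss, aa, rr, ss2 = zip(*random.sample(buffer, batch_size))
                ss, ss2 = torch.stack(ss), torch.stack(ss2)
                aa, rr = torch.tensor(aa), torch.tensor(rr)
                q = qnet(ss).gather(1, aa.unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    target = rr + gamma * qnet(ss2).max(dim=1).values
                loss = nn.functional.mse_loss(q, target)
                opt.zero_grad()
                loss.backward()
                opt.step()


# Stage 1: pre-train on rough-simulator rollouts, where episodes are cheap.
qnet = QNet()
train(qnet, StubEnv(), episodes=50)

# Stage 2: fine-tune the same pre-trained network on the physical arm,
# where only a few real-world episodes are affordable.
train(qnet, StubEnv(), episodes=5)
```

The design point the abstract highlights is that the same network weights carry over from stage 1 to stage 2, so the scarce real-world episodes start from a simulator-informed initialization rather than from scratch, which is what reduces the real-world data requirement and speeds convergence.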