Humanoid robot control based on reinforcement learning
Saved in:
Main authors: | , , , , , |
Format: | Conference proceedings |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Many existing reinforcement learning methods have treated tasks in a discrete, low-dimensional state space. However, controlling humanoid robots smoothly requires a continuous, high-dimensional state space. In this paper, to handle such a state space, we propose an adaptive allocation method of basis functions for reinforcement learning. Until now, grid-based or incremental allocation methods have been proposed for allocating basis functions. However, these methods may suffer from the curse of dimensionality and fall into local minima. In contrast, our method avoids local minima, which are detected via the trace of activity of the basis functions. That is, if the current state is judged to have fallen into a local minimum, our method eliminates the basis function that most strongly affects that state. Moreover, because of this elimination process, our method learns with a small number of basis functions. To confirm the effectiveness of our method, we used a maze task to compare it with an existing method that has only an allocation process. Furthermore, as an example of learning in a continuous, high-dimensional state space, our method was applied to motion control of a humanoid robot. We demonstrate that our method provides better performance than the existing method. |
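The mechanism the abstract describes, allocating radial basis functions on demand and eliminating the most active one when the learner appears stuck, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the Gaussian RBF form, the trace-decay constant, and the novelty threshold are all assumptions for the sake of the example.

```python
import numpy as np

class AdaptiveBasisAllocator:
    """Hypothetical sketch of adaptive basis-function allocation:
    RBF centers are added where coverage is poor, activity traces are
    tracked, and the dominant basis function is eliminated when the
    learner is judged to be stuck in a local minimum."""

    def __init__(self, dim, sigma=0.5, trace_decay=0.9, novelty_thresh=0.3):
        self.centers = np.empty((0, dim))   # RBF centers, allocated on demand
        self.sigma = sigma                  # assumed RBF width
        self.trace_decay = trace_decay      # assumed decay rate of activity traces
        self.novelty_thresh = novelty_thresh  # assumed coverage threshold
        self.traces = np.empty(0)           # one activity trace per basis function

    def activations(self, state):
        """Gaussian RBF activations of all current basis functions."""
        if len(self.centers) == 0:
            return np.empty(0)
        d2 = np.sum((self.centers - state) ** 2, axis=1)
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def observe(self, state):
        """Allocate a new basis function if the state is poorly covered,
        then decay and update the activity traces."""
        act = self.activations(state)
        if act.size == 0 or act.max() < self.novelty_thresh:
            self.centers = np.vstack([self.centers, state])
            self.traces = np.append(self.traces, 0.0)
            act = self.activations(state)
        self.traces = self.trace_decay * self.traces + act

    def eliminate_dominant(self):
        """On detecting a local minimum, remove the basis function
        with the largest activity trace (the one affecting the
        current state the most)."""
        i = int(np.argmax(self.traces))
        self.centers = np.delete(self.centers, i, axis=0)
        self.traces = np.delete(self.traces, i)
```

In use, `observe` runs at every step of the learner; the (unspecified) local-minimum detector would call `eliminate_dominant` when progress stalls, keeping the basis-function count low as the abstract claims.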
DOI: | 10.1109/MHS.2004.1421274 |