Implementation of Reinforcement Learning by transferring sub-goal policies in robot navigation
Saved in:
Main authors: ,
Format: Conference paper
Language: English; Turkish
Subjects:
Online access: Order full text
Abstract: Although Reinforcement Learning (RL) is one of the most popular learning methods, it suffers from the curse of dimensionality. If the state and action domains of the problem are immense, the learning rate of the agent decreases dramatically and eventually the agent loses the ability to learn. To mitigate the curse of dimensionality, researchers typically concentrate on methods that reduce the complexity of the problem: some model the problem hierarchically, while others transfer the knowledge obtained while learning simpler tasks. Learning from scratch ignores previous experience, while transferring full knowledge may mislead the agent because of conflicting requirements. The main goal of this study is to improve the learning rate of the agent by transferring only the relevant parts of the knowledge acquired from previous experiences; its main contribution is to merge these two approaches so that only the relevant knowledge is transferred in a single setting. The proposed method is tested on a robot navigation task in a simulated room-based environment.
DOI: 10.1109/SIU.2013.6531546
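The abstract describes warm-starting a harder navigation task with knowledge learned for a sub-goal. As a minimal sketch of that general idea (not the authors' implementation — the corridor task, reward values, and all names below are illustrative assumptions), tabular Q-learning on a toy task can be initialized from a Q-table learned for an intermediate sub-goal:

```python
import random

# Hypothetical 1-D corridor: states 0..N-1, actions move left (-1) or right (+1).
# A Q-table learned for reaching an intermediate sub-goal is reused to
# warm-start learning of the full task.
N = 10
ACTIONS = (-1, +1)

def step(state, action, goal):
    """Move in the corridor; small step cost, reward 1.0 at the goal."""
    nxt = max(0, min(N - 1, state + action))
    return nxt, (1.0 if nxt == goal else -0.01), nxt == goal

def q_learn(goal, q=None, episodes=200, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Epsilon-greedy tabular Q-learning; `q` lets us transfer a prior table."""
    rng = random.Random(seed)
    if q is None:
        q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a, goal)
            best_next = max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

# Learn a sub-goal policy (reach state 5), then transfer its Q-table as the
# starting point for the harder task (reach state 9).
q_sub = q_learn(goal=5)
q_full = q_learn(goal=9, q=dict(q_sub))
```

Because the sub-goal lies on the way to the final goal, the transferred Q-values already steer the agent rightward through states 0–5, so the second learning phase only has to explore the remaining stretch; transferring the full table of an unrelated task could instead mislead the agent, which is the conflict the paper addresses by transferring only relevant knowledge.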