Reinforcement Learning Transfer Based on Subgoal Discovery and Subtask Similarity
Published in: IEEE/CAA Journal of Automatica Sinica, 2014-07, Vol. 1 (3), pp. 257-266
Main authors: , , , ,
Format: Article
Language: English
Abstract: This paper studies the problem of transfer learning in the context of reinforcement learning. We propose a novel transfer learning method that can speed up reinforcement learning with the aid of previously learned tasks. Before performing extensive learning episodes, our method analyzes the learning task via some exploration in the environment, and then attempts to reuse previous learning experience whenever it is possible and appropriate. In particular, our proposed method consists of four stages: 1) subgoal discovery, 2) option construction, 3) similarity searching, and 4) option reusing. To fulfill the task of identifying similar options, we propose a novel similarity measure between options, built upon the intuition that similar options have similar state-action probabilities. We examine our algorithm in extensive experiments, comparing it with existing methods. The results show that our method outperforms conventional non-transfer reinforcement learning algorithms, as well as existing transfer learning methods, by a wide margin.
ISSN: 2329-9266, 2329-9274
DOI: 10.1109/JAS.2014.7004683
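
The abstract's intuition that "similar options have similar state-action probabilities" can be sketched as follows. This is a minimal illustration only, not the paper's actual measure: the function name `option_similarity`, the use of total-variation distance, and the toy probability tables are all assumptions made for the example.

```python
import numpy as np

def option_similarity(p, q):
    """Illustrative similarity between two options, each summarized by an
    (n_states, n_actions) table of state-action probabilities.

    Returns 1 minus the mean per-state total-variation distance between the
    two action distributions. This distance choice is an assumption; the
    paper defines its own measure over state-action probabilities."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # Total-variation distance per state: half the L1 distance
    # between the two action distributions in that state.
    tv = 0.5 * np.abs(p - q).sum(axis=1)
    return 1.0 - tv.mean()

# Two toy options over 2 states and 2 actions each.
opt_a = [[0.9, 0.1], [0.2, 0.8]]
opt_b = [[0.8, 0.2], [0.3, 0.7]]
print(option_similarity(opt_a, opt_b))  # close to 0.9
```

Under a measure of this kind, an option learned in a previous task can be ranked against options extracted from the new task, and the best-matching one reused, which is the role of the "similarity searching" and "option reusing" stages described in the abstract.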