Oppositional extension of reinforcement learning techniques

Bibliographic Details
Published in: Information Sciences 2014-08, Vol. 275, p. 101-114
Main authors: Mahootchi, M., Tizhoosh, H.R., Ponnambalam, K.
Format: Article
Language: English
Online access: Full text
Description
Abstract: In this paper, we present different opposition schemes for four reinforcement learning methods: Q-learning, Q(λ), Sarsa, and Sarsa(λ) under assumptions that are reasonable for many real-world problems where type-II opposites generally better reflect the nature of the problem at hand. It appears that the aggregation of opposition-based schemes with regular learning methods can significantly speed up the learning process, especially where the number of observations is small or the state space is large. We verify the performance of the proposed methods using two different applications: a grid-world problem and a single water reservoir management problem.
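The core idea in the abstract — pairing each regular update with an update for an "opposite" action — can be illustrated with a minimal sketch, assuming a toy one-dimensional grid world. The state space, opposite-action map, and the shortcut of computing the opposite transition from the known deterministic dynamics are all illustrative assumptions for this sketch; the paper's type-II scheme instead defines opposites via observed value ranges, which is not reproduced here.

```python
import random
from collections import defaultdict

random.seed(0)

N = 10                        # states 0..N-1, goal at state N-1
ALPHA, GAMMA = 0.5, 0.95
ACTIONS = (-1, +1)            # step left / step right
OPPOSITE = {-1: +1, +1: -1}   # hypothetical opposite-action map

Q = defaultdict(float)

def step(state, action):
    """Deterministic grid dynamics: move one cell, clamped to [0, N-1]."""
    nxt = min(max(state + action, 0), N - 1)
    reward = 1.0 if nxt == N - 1 else 0.0
    return nxt, reward

def q_update(s, a, r, s2):
    """Standard one-step Q-learning update."""
    target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

for _ in range(200):                  # episodes
    s = 0
    while s != N - 1:
        a = random.choice(ACTIONS)    # random behaviour; Q-learning is off-policy
        s2, r = step(s, a)
        q_update(s, a, r, s2)         # regular update
        # Opposition step: the toy world is deterministic, so the opposite
        # action's outcome is computable without another environment
        # interaction; its Q-value is updated in the same sweep.
        a_op = OPPOSITE[a]
        s2_op, r_op = step(s, a_op)
        q_update(s, a_op, r_op, s2_op)
        s = s2

# Greedy policy recovered from the learned Q-values.
greedy = {s: max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N - 1)}
```

Because every real transition also triggers an opposite-action update, each interaction yields two Q-value updates instead of one, which is the mechanism behind the speed-up the abstract claims for small observation counts or large state spaces.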
ISSN:0020-0255
1872-6291
DOI:10.1016/j.ins.2014.02.024