Adaptive stepsize selection for online Q-learning in a non-stationary environment
Saved in:
Main authors: , ,
Format: Conference paper
Language: English
Subjects:
Online access: Order full text
Abstract: We consider the problem of real-time control of a discrete-time Markov decision process (MDP) in a non-stationary environment, which is characterized by large, sudden changes in the parameters of the MDP. We consider here an online version of the well-known Q-learning algorithm, which operates directly in its target environment. In order to track changes, the stepsizes (or learning rates) must be bounded away from zero. In this paper, we show how the theory of constant stepsize stochastic approximation algorithms can be used to motivate and develop an adaptive stepsize algorithm that is appropriate for the online learning scenario described above. Our algorithm automatically achieves a desirable balance between accuracy and rate of reaction, and seeks to track the optimal policy with some pre-determined level of confidence.
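The abstract's key point, that a stepsize bounded away from zero lets Q-learning track an MDP whose parameters change abruptly, can be illustrated with a minimal sketch. This is not the paper's adaptive algorithm; it is plain tabular Q-learning with a fixed stepsize on a toy two-state environment whose reward table is flipped mid-run (all environment details here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2
gamma, alpha = 0.9, 0.1          # constant stepsize, bounded away from zero

Q = np.zeros((n_states, n_actions))
rewards = np.array([[1.0, 0.0],  # illustrative r(s, a); will change suddenly
                    [0.0, 1.0]])

s = 0
for t in range(20000):
    if t == 10000:
        rewards = rewards[::-1]          # large, sudden change in the MDP
    a = int(rng.integers(n_actions))     # random exploratory behavior policy
    r = rewards[s, a]
    s_next = int(rng.integers(n_states)) # random transitions, for simplicity
    # constant-stepsize Q-learning update
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

greedy = Q.argmax(axis=1)  # greedy policy after the change has been tracked
```

Because alpha stays constant, the estimates keep a nonzero "reaction rate" and re-converge toward the post-change optimal policy; a decaying 1/n schedule would effectively freeze before the change and fail to track it. The paper's contribution is choosing such a stepsize adaptively rather than fixing it by hand.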
DOI: 10.1109/WODES.2006.382396