An Empirical Investigation of Value-Based Multi-objective Reinforcement Learning for Stochastic Environments
Abstract: One common approach to solving multi-objective reinforcement learning (MORL) problems is to extend conventional Q-learning by using vector Q-values in combination with a utility function. However, issues can arise with this approach in the context of stochastic environments, particularly when optimising for the Scalarised Expected Reward (SER) criterion. This paper extends prior research, providing a detailed examination of the factors influencing the frequency with which value-based MORL Q-learning algorithms learn the SER-optimal policy for an environment with stochastic state transitions. We empirically examine several variations of the core multi-objective Q-learning algorithm as well as reward engineering approaches, and demonstrate the limitations of these methods. In particular, we highlight the critical impact of the noisy Q-value estimates issue on the stability and convergence of these algorithms.
DOI: 10.48550/arxiv.2401.03163
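
For readers unfamiliar with the setup described in the abstract, the following is a minimal sketch of the kind of value-based multi-objective Q-learning agent it refers to: one vector Q-value per state-action pair, with a (possibly nonlinear) utility function applied to those vectors for greedy action selection. The environment dimensions, utility function, and hyperparameters below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

n_states, n_actions, n_objectives = 10, 4, 2   # illustrative sizes, not from the paper
alpha, gamma, epsilon = 0.1, 1.0, 0.1          # illustrative hyperparameters

# One Q-vector (one entry per objective) for each state-action pair.
Q = np.zeros((n_states, n_actions, n_objectives))

def utility(q_vec):
    # Hypothetical nonlinear utility over the objective vector. Under SER the
    # utility should be applied to the *expected* return vector, so applying it
    # to noisy per-state-action Q estimates is exactly where problems can arise.
    return q_vec[0] - q_vec[1] ** 2

def select_action(state, rng):
    # Epsilon-greedy over the utility of each action's Q-vector.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax([utility(Q[state, a]) for a in range(n_actions)]))

def update(state, action, reward_vec, next_state, done):
    # Vector TD update, bootstrapping from the next action that maximises utility.
    next_a = int(np.argmax([utility(Q[next_state, a]) for a in range(n_actions)]))
    target = np.asarray(reward_vec) + (0.0 if done else gamma) * Q[next_state, next_a]
    Q[state, action] += alpha * (target - Q[state, action])

# Example usage with a dummy transition:
rng = np.random.default_rng(0)
a = select_action(0, rng)
update(0, a, [1.0, 0.0], next_state=1, done=False)
```

As commonly defined in the MORL literature, the SER criterion asks the agent to maximise the utility of the expected return vector, U(E[Σ_t r_t]), rather than the expected utility of individual episode returns, E[U(Σ_t r_t)] (the ESR criterion). Applying a nonlinear utility to per-sample, noisy Q-vector estimates, as in the sketch above, is why such agents can fail to learn the SER-optimal policy in environments with stochastic state transitions.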