LLQL: Logistic Likelihood Q-Learning for Reinforcement Learning
Format: Article
Language: English
Abstract: Modern reinforcement learning (RL) can be categorized into online and offline variants. Although the Bellman equation is pivotal to both, current research on it revolves primarily around optimization techniques and performance enhancement rather than the inherent structural properties of the Bellman error, such as its distribution. This study investigates the distribution of the Bellman approximation error through iterative exploration of the Bellman equation and observes that the error approximately follows a Logistic distribution. Based on this observation, we proposed using the Logistic maximum likelihood function (LLoss) as an alternative to the commonly used mean squared error (MSELoss), which implicitly assumes a Normal distribution for Bellman errors. We validated this hypothesis through extensive numerical experiments across diverse online and offline environments. In particular, we applied the Logistic correction to the loss functions of various RL baseline methods and observed that the results with LLoss consistently outperformed their MSE counterparts. We also conducted Kolmogorov-Smirnov tests to confirm the reliability of the Logistic fit. Moreover, our theory connects the Bellman error to the proportional reward scaling phenomenon through a distribution-based analysis. Furthermore, we applied a bias-variance decomposition to sampling from the Logistic distribution. The theoretical and empirical insights of this study lay a valuable foundation for future investigations and enhancements centered on the distribution of the Bellman error.
DOI: 10.48550/arxiv.2307.02345
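
For readers who want to experiment with the idea described in the abstract, below is a minimal sketch of a Logistic negative log-likelihood loss for Bellman errors. It assumes a zero-mean Logistic error with a fixed scale; the function name `logistic_nll_loss`, the `scale` hyperparameter, and the PyTorch framing are illustrative assumptions, not the authors' implementation.

```python
import math

import torch
import torch.nn.functional as F


def logistic_nll_loss(q_pred: torch.Tensor, q_target: torch.Tensor,
                      scale: float = 1.0) -> torch.Tensor:
    """Negative log-likelihood of the Bellman error under Logistic(0, scale).

    MSELoss corresponds to a Normal assumption on the Bellman error; this
    loss corresponds to a zero-mean Logistic assumption instead.
    """
    delta = (q_pred - q_target) / scale
    # For the Logistic pdf f(x) = exp(-x/s) / (s * (1 + exp(-x/s))**2),
    # -log f(x) = log(s) + x/s + 2*log(1 + exp(-x/s)); softplus keeps it stable.
    return (math.log(scale) + delta + 2.0 * F.softplus(-delta)).mean()


# Hypothetical DQN-style usage: replace F.mse_loss(q_pred, target.detach())
# with logistic_nll_loss(q_pred, target.detach(), scale=1.0).
```

The gradient of this loss with respect to the scaled error equals 2*sigmoid(delta) - 1, so it is bounded for large Bellman errors. Whether the Logistic assumption holds for a given setup could be checked on the observed errors, e.g. with `scipy.stats.kstest(errors, "logistic")`, in the spirit of the Kolmogorov-Smirnov tests mentioned in the abstract.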