Accelerating Reinforcement Learning with Value-Conditional State Entropy Exploration
Format: Article
Language: English
Abstract: A promising technique for exploration is to maximize the entropy of visited
state distribution, i.e., state entropy, by encouraging uniform coverage of
visited state space. While it has been effective for an unsupervised setup, it
tends to struggle in a supervised setup with a task reward, where an agent
prefers to visit high-value states to exploit the task reward. Such a
preference can cause an imbalance between the distributions of high-value
states and low-value states, which biases exploration towards low-value state
regions as a result of the state entropy increasing when the distribution
becomes more uniform. This issue is exacerbated when high-value states are
narrowly distributed within the state space, making it difficult for the agent
to complete the tasks. In this paper, we present a novel exploration technique
that maximizes the value-conditional state entropy, which separately estimates
the state entropies that are conditioned on the value estimates of each state,
then maximizes their average. By only considering the visited states with
similar value estimates for computing the intrinsic bonus, our method prevents
the distribution of low-value states from affecting exploration around
high-value states, and vice versa. We demonstrate that the proposed alternative
to the state entropy baseline significantly accelerates various reinforcement
learning algorithms across a variety of tasks within MiniGrid, DeepMind Control
Suite, and Meta-World benchmarks. Source code is available at
https://sites.google.com/view/rl-vcse.
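For illustration, the sketch below shows one way a value-conditional state-entropy bonus could be computed, assuming a particle-based k-nearest-neighbor entropy proxy evaluated separately among states whose value estimates fall in the same bin. The function name `vcse_bonus`, the quantile binning, the neighbor count `k`, and the `log(1 + distance)` bonus are illustrative assumptions for this sketch, not the paper's exact estimator.

```python
import numpy as np

def vcse_bonus(states, values, k=5, n_bins=4):
    """Hypothetical value-conditional state-entropy bonus.

    For each visited state, the k-NN distance is computed only among states
    whose value estimates fall into the same value bin, so that densely
    visited low-value regions do not distort the exploration bonus assigned
    around high-value regions (and vice versa).
    """
    states = np.asarray(states, dtype=np.float64)   # shape (N, state_dim)
    values = np.asarray(values, dtype=np.float64)   # shape (N,)

    # Assign each state to a value bin via interior quantile cut points.
    cuts = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    bins = np.digitize(values, cuts)

    bonus = np.zeros(len(states))
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        if len(idx) <= k:                # too few states in this bin for k-NN
            continue
        group = states[idx]
        # Pairwise Euclidean distances within the value bin.
        dists = np.linalg.norm(group[:, None, :] - group[None, :, :], axis=-1)
        # Index k skips the zero self-distance, giving the k-th nearest neighbor.
        knn = np.sort(dists, axis=1)[:, k]
        bonus[idx] = np.log(knn + 1.0)   # particle-based entropy proxy
    return bonus
```

In use, such a bonus would typically be added to the task reward as an intrinsic reward for the visited states before updating the underlying RL algorithm.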
DOI: 10.48550/arxiv.2305.19476