K-percent Evaluation for Lifelong RL
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: In continual or lifelong reinforcement learning, access to the environment
should be limited. If we aspire to design algorithms that can run for long
periods, continually adapting to new, unexpected situations, then we must be
willing to deploy our agents without tuning their hyperparameters over the
agent's entire lifetime. The standard practice in deep RL, and even continual
RL, is to assume unfettered access to the deployment environment for the full
lifetime of the agent. In this paper, we propose a new approach for evaluating
lifelong RL agents where only k percent of the experiment data can be used for
hyperparameter tuning. We then conduct an empirical study of DQN and SAC across
a variety of continuing and non-stationary domains. We find agents generally
perform poorly when restricted to k-percent tuning, whereas several algorithmic
mitigations designed to maintain network plasticity perform surprisingly well.
DOI: 10.48550/arxiv.2404.02113
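
The abstract describes an evaluation protocol rather than a new algorithm: hyperparameters may only be selected using the first k percent of the experiment data, after which the chosen configuration runs unchanged for the agent's entire lifetime. Below is a minimal Python sketch of that idea under stated assumptions; it is not the paper's exact procedure. The `run_agent` helper, the `step_size` grid, and the use of mean reward as the selection metric are illustrative placeholders, not interfaces from the paper.

```python
import numpy as np

# Hypothetical stand-in: `run_agent` is not from the paper; it represents
# whatever agent/environment loop is in use and returns per-step rewards.
def run_agent(config, num_steps, seed=0):
    """Run one agent lifetime and return per-step rewards (placeholder)."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=config["step_size"], scale=1.0, size=num_steps)


def k_percent_evaluation(configs, lifetime_steps, k=0.1, seed=0):
    """Sketch of k-percent tuning: hyperparameters may only be selected
    using the first k percent of the lifetime; the chosen configuration
    is then deployed, unchanged, for the full lifetime."""
    tuning_steps = int(k * lifetime_steps)

    # Tuning phase: each candidate configuration sees only the k-percent budget.
    tuning_scores = {}
    for name, config in configs.items():
        rewards = run_agent(config, tuning_steps, seed=seed)
        tuning_scores[name] = rewards.mean()

    best = max(tuning_scores, key=tuning_scores.get)

    # Deployment phase: the selected configuration runs for the entire
    # lifetime with no further tuning; report lifetime average reward.
    deploy_rewards = run_agent(configs[best], lifetime_steps, seed=seed + 1)
    return best, deploy_rewards.mean()


if __name__ == "__main__":
    # Illustrative hyperparameter grid; in practice this would be, e.g.,
    # step sizes or replay settings for DQN or SAC.
    grid = {f"lr={lr}": {"step_size": lr} for lr in (0.1, 0.5, 1.0)}
    chosen, score = k_percent_evaluation(grid, lifetime_steps=100_000, k=0.1)
    print(chosen, score)
```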