TrajDeleter: Enabling Trajectory Forgetting in Offline Reinforcement Learning Agents
Saved in:
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Reinforcement learning (RL) trains an agent from experiences gained by interacting with the environment. In scenarios where online interactions are impractical, offline RL, which trains the agent using pre-collected datasets, has become popular. While this new paradigm shows remarkable effectiveness across various real-world domains, such as healthcare and energy management, there is a growing demand to enable agents to rapidly and completely eliminate the influence of specific trajectories from both the training dataset and the trained agents. To address this problem, this paper proposes TrajDeleter, the first practical approach to trajectory unlearning for offline RL agents. The key idea of TrajDeleter is to guide the agent toward deteriorating performance when it encounters states associated with the trajectories to be unlearned, while ensuring that the agent maintains its original performance level on the remaining trajectories. Additionally, we introduce TrajAuditor, a simple yet efficient method to evaluate whether TrajDeleter successfully eliminates the influence of the specified trajectories from the offline RL agent. Extensive experiments conducted on six offline RL algorithms and three tasks demonstrate that TrajDeleter requires only about 1.5% of the time needed for retraining from scratch. It effectively unlearns an average of 94.8% of the targeted trajectories and still performs well in actual environment interactions after unlearning. The replication package and agent parameters are available online.
DOI: 10.48550/arxiv.2404.12530
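
The unlearning idea described in the abstract, degrading the agent's behavior on states from the trajectories being forgotten while preserving it on the remaining data, can be pictured as a two-term fine-tuning objective. The PyTorch sketch below is a minimal illustration of that structure, not the paper's actual algorithm; every name in it (policy, q_net, ref_policy, lambda_forget, the batch keys) is an assumed placeholder.

```python
# Minimal sketch (assumed names, not the paper's API) of the two-term
# unlearning idea from the abstract: fine-tune the agent so its value
# deteriorates on states from the trajectories being forgotten, while its
# behavior stays close to the original agent on the remaining trajectories.
import torch
import torch.nn.functional as F

def unlearning_step(policy, q_net, ref_policy, optimizer,
                    forget_batch, retain_batch, lambda_forget=1.0):
    """One gradient step of the sketched objective. Only `policy` is
    updated; `q_net` and `ref_policy` serve as a frozen critic and a
    frozen reference copy of the original agent."""
    s_forget = forget_batch["states"]   # states from trajectories to unlearn
    s_retain = retain_batch["states"]   # states from trajectories to keep

    # Term 1: minimizing the critic's value of the policy's own actions on
    # forget states pushes the agent toward low-return behavior there.
    forget_loss = q_net(s_forget, policy(s_forget)).mean()

    # Term 2: keep the policy's actions close to the original
    # (pre-unlearning) agent on retained states, preserving performance
    # on the rest of the dataset.
    with torch.no_grad():
        ref_actions = ref_policy(s_retain)
    retain_loss = F.mse_loss(policy(s_retain), ref_actions)

    loss = lambda_forget * forget_loss + retain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: freeze a reference copy of the trained agent before fine-tuning,
# e.g. ref_policy = copy.deepcopy(policy) with gradients disabled.
```

The scalar weight lambda_forget (an assumed knob, not from the paper) makes the trade-off between how aggressively the targeted trajectories are erased and how much of the original performance is retained explicitly tunable.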