No Representation, No Trust: Connecting Representation, Collapse, and Trust Issues in PPO
Saved in:
Main authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Reinforcement learning (RL) is inherently rife with non-stationarity since
the states and rewards the agent observes during training depend on its
changing policy. Therefore, networks in deep RL must be capable of adapting to
new observations and fitting new targets. However, previous works have observed
that networks trained under non-stationarity exhibit an inability to continue
learning, termed loss of plasticity, and eventually a collapse in performance.
For off-policy deep value-based RL methods, this phenomenon has been correlated
with a decrease in representation rank and the ability to fit random targets,
termed capacity loss. Although this correlation has generally been attributed
to neural network learning under non-stationarity, the connection to
representation dynamics has not been carefully studied in on-policy policy
optimization methods. In this work, we empirically study representation
dynamics in Proximal Policy Optimization (PPO) on the Atari and MuJoCo
environments, revealing that PPO agents are also affected by feature rank
deterioration and capacity loss. We show that this is aggravated by stronger
non-stationarity, ultimately driving the actor's performance to collapse,
regardless of the performance of the critic. We ask why the trust region,
specific to methods like PPO, cannot alleviate or prevent the collapse and find
a connection between representation collapse and the degradation of the trust
region, one exacerbating the other. Finally, we present Proximal Feature
Optimization (PFO), a novel auxiliary loss that, along with other
interventions, shows that regularizing the representation dynamics mitigates
the performance collapse of PPO agents. |
DOI: | 10.48550/arxiv.2405.00662 |
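
The summary refers to a decrease in representation rank as a marker of collapse. Below is a minimal sketch of one common way such a quantity is measured in the capacity-loss literature: the "effective rank", i.e. the smallest number of singular values of the feature matrix that capture a (1 - delta) fraction of its spectral mass. This is an illustrative assumption; the paper's exact metric may differ.

```python
import torch

def effective_rank(features: torch.Tensor, delta: float = 0.01) -> int:
    """Effective rank of a (batch, dim) matrix of penultimate-layer activations.

    Returns the smallest k such that the top-k singular values account for
    at least (1 - delta) of the total singular-value mass.
    """
    singular_values = torch.linalg.svdvals(features)  # descending order
    cumulative = torch.cumsum(singular_values, dim=0) / singular_values.sum()
    # first index where the cumulative mass reaches 1 - delta, converted to a count
    return int(torch.searchsorted(cumulative, 1.0 - delta).item()) + 1
```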
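The summary also introduces Proximal Feature Optimization (PFO) as an auxiliary loss that regularizes representation dynamics, without spelling out its form. The following is a hedged sketch of one plausible such regularizer, not the authors' reference implementation: an L2 penalty on how far the actor's features drift from the behaviour (pre-update) policy's features, added to the standard PPO clipped surrogate. The function name, the `feature_coef` weight, and the choice of layer are illustrative assumptions.

```python
import torch

def ppo_loss_with_feature_regularizer(
    ratio: torch.Tensor,               # pi_theta(a|s) / pi_old(a|s), shape (batch,)
    advantages: torch.Tensor,          # advantage estimates, shape (batch,)
    actor_features: torch.Tensor,      # current actor features, shape (batch, dim)
    old_actor_features: torch.Tensor,  # behaviour-policy features on the same states
    clip_eps: float = 0.2,
    feature_coef: float = 1.0,         # hypothetical weight on the auxiliary term
) -> torch.Tensor:
    # Standard PPO clipped surrogate objective (written as a loss to minimize).
    clipped_ratio = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    policy_loss = -torch.min(ratio * advantages, clipped_ratio * advantages).mean()

    # Auxiliary term: discourage large per-update changes in the representation,
    # analogous in spirit to the trust region constraining the policy itself.
    feature_loss = (actor_features - old_actor_features.detach()).pow(2).mean()

    return policy_loss + feature_coef * feature_loss
```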